Dec 08 18:51:36 crc systemd[1]: Starting Kubernetes Kubelet...
Dec 08 18:51:36 crc kubenswrapper[4998]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:36 crc kubenswrapper[4998]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Dec 08 18:51:36 crc kubenswrapper[4998]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:36 crc kubenswrapper[4998]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:36 crc kubenswrapper[4998]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Dec 08 18:51:36 crc kubenswrapper[4998]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.683673 4998 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687929 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687952 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687959 4998 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687965 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687971 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687976 4998 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687981 4998 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687986 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687991 4998 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.687997 4998 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688003 4998 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688013 4998 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688024 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688031 4998 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688040 4998 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688047 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688053 4998 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688059 4998 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688066 4998 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688072 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688079 4998 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688086 4998 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688092 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688113 4998 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688119 4998 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688125 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688130 4998 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688136 4998 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688141 4998 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688146 4998 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688150 4998 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688156 4998 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688162 4998 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688167 4998 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688171 4998 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688177 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688182 4998 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688186 4998 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688191 4998 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688196 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688201 4998 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688206 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688211 4998 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688219 4998 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688227 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688234 4998 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688240 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688245 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688250 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688256 4998 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688262 4998 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688268 4998 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688273 4998 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688278 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688284 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688290 4998 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688297 4998 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688302 4998 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688307 4998 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688312 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688317 4998 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688322 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688327 4998 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688331 4998 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688336 4998 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688341 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688346 4998 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688351 4998 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688356 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688360 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688365 4998 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688370 4998 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688375 4998 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688380 4998 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688385 4998 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688393 4998 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688399 4998 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688404 4998 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688410 4998 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688415 4998 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688419 4998 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688424 4998 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688429 4998 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688434 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688439 4998 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.688463 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689141 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689165 4998 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689178 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689183 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689188 4998 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689193 4998 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689198 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689204 4998 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689209 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689214 4998 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689219 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689224 4998 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689229 4998 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689234 4998 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689239 4998 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689244 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689249 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689255 4998 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689261 4998 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689266 4998 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689271 4998 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689276 4998 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689281 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689287 4998 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689292 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689297 4998 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689302 4998 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689307 4998 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689312 4998 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689319 4998 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689326 4998 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689341 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689347 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689360 4998 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689371 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689376 4998 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689381 4998 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689386 4998 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689391 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689396 4998 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689401 4998 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689406 4998 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689411 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689417 4998 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689422 4998 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689427 4998 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689432 4998 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689437 4998 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689442 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689447 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689453 4998 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689458 4998 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689463 4998 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689467 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689474 4998 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689479 4998 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689484 4998 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689489 4998 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689494 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689499 4998 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689504 4998 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689509 4998 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689515 4998 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689520 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689525 4998 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689530 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689535 4998 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689539 4998 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689545 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689550 4998 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689555 4998 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689559 4998 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689564 4998 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689573 4998 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689578 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689584 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689590 4998 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689595 4998 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689600 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689605 4998 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689611 4998 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689616 4998 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689620 4998 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689625 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689630 4998 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.689635 4998 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690069 4998 flags.go:64] FLAG: --address="0.0.0.0"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690089 4998 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690101 4998 flags.go:64] FLAG: --anonymous-auth="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690109 4998 flags.go:64] FLAG: --application-metrics-count-limit="100"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690116 4998 flags.go:64] FLAG: --authentication-token-webhook="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690122 4998 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690129 4998 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690138 4998 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690145 4998 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690150 4998 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690157 4998 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690163 4998 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690169 4998 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690174 4998 flags.go:64] FLAG: --cgroup-root=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690179 4998 flags.go:64] FLAG: --cgroups-per-qos="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690185 4998 flags.go:64] FLAG: --client-ca-file=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690191 4998 flags.go:64] FLAG: --cloud-config=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690196 4998 flags.go:64] FLAG: --cloud-provider=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690202 4998 flags.go:64] FLAG: --cluster-dns="[]"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690209 4998 flags.go:64] FLAG: --cluster-domain=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690215 4998 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690221 4998 flags.go:64] FLAG: --config-dir=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690226 4998 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690232 4998 flags.go:64] FLAG: --container-log-max-files="5"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690239 4998 flags.go:64] FLAG: --container-log-max-size="10Mi"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690244 4998 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690251 4998 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690257 4998 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690263 4998 flags.go:64] FLAG: --contention-profiling="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690268 4998 flags.go:64] FLAG: --cpu-cfs-quota="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690280 4998 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690286 4998 flags.go:64] FLAG: --cpu-manager-policy="none"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690292 4998 flags.go:64] FLAG: --cpu-manager-policy-options=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690304 4998 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690310 4998 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690316 4998 flags.go:64] FLAG: --enable-debugging-handlers="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690321 4998 flags.go:64] FLAG: --enable-load-reader="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690327 4998 flags.go:64] FLAG: --enable-server="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690333 4998 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690340 4998 flags.go:64] FLAG: --event-burst="100"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690346 4998 flags.go:64] FLAG: --event-qps="50"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690352 4998 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690357 4998 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690363 4998 flags.go:64] FLAG: --eviction-hard=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690369 4998 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690375 4998 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690381 4998 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690387 4998 flags.go:64] FLAG: --eviction-soft=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690392 4998 flags.go:64] FLAG: --eviction-soft-grace-period=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690398 4998 flags.go:64] FLAG: --exit-on-lock-contention="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690403 4998 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690409 4998 flags.go:64] FLAG: --experimental-mounter-path=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690415 4998 flags.go:64] FLAG: --fail-cgroupv1="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690420 4998 flags.go:64] FLAG: --fail-swap-on="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690425 4998 flags.go:64] FLAG: --feature-gates=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690432 4998 flags.go:64] FLAG: --file-check-frequency="20s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690438 4998 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690444 4998 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690450 4998 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690456 4998 flags.go:64] FLAG: --healthz-port="10248"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690461 4998 flags.go:64] FLAG: --help="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690466 4998 flags.go:64] FLAG: --hostname-override=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690475 4998 flags.go:64] FLAG: --housekeeping-interval="10s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690482 4998 flags.go:64] FLAG: --http-check-frequency="20s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690488 4998 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690493 4998 flags.go:64] FLAG: --image-credential-provider-config=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690500 4998 flags.go:64] FLAG: --image-gc-high-threshold="85"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690507 4998 flags.go:64] FLAG: --image-gc-low-threshold="80"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690513 4998 flags.go:64] FLAG: --image-service-endpoint=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690518 4998 flags.go:64] FLAG: --kernel-memcg-notification="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690524 4998 flags.go:64] FLAG: --kube-api-burst="100"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690530 4998 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690550 4998 flags.go:64] FLAG: --kube-api-qps="50"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690556 4998 flags.go:64] FLAG: --kube-reserved=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690569 4998 flags.go:64] FLAG: --kube-reserved-cgroup=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690574 4998 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690590 4998 flags.go:64] FLAG: --kubelet-cgroups=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690596 4998 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690602 4998 flags.go:64] FLAG: --lock-file=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690607 4998 flags.go:64] FLAG: --log-cadvisor-usage="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690619 4998 flags.go:64] FLAG: --log-flush-frequency="5s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690625 4998 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690633 4998 flags.go:64] FLAG: --log-json-split-stream="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690639 4998 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690644 4998 flags.go:64] FLAG: --log-text-split-stream="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690650 4998 flags.go:64] FLAG: --logging-format="text"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690655 4998 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690661 4998 flags.go:64] FLAG: --make-iptables-util-chains="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690667 4998 flags.go:64] FLAG: --manifest-url=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690672 4998 flags.go:64] FLAG: --manifest-url-header=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690700 4998 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690706 4998 flags.go:64] FLAG: --max-open-files="1000000"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690714 4998 flags.go:64] FLAG: --max-pods="110"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690719 4998 flags.go:64] FLAG: --maximum-dead-containers="-1"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690726 4998 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690731 4998 flags.go:64] FLAG: --memory-manager-policy="None"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690737 4998 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690742 4998 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690748 4998 flags.go:64] FLAG: --node-ip="192.168.126.11"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690754 4998 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690769 4998 flags.go:64] FLAG: --node-status-max-images="50"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690775 4998 flags.go:64] FLAG: --node-status-update-frequency="10s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690780 4998 flags.go:64] FLAG: --oom-score-adj="-999"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690786 4998 flags.go:64] FLAG: --pod-cidr=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690792 4998 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690802 4998 flags.go:64] FLAG: --pod-manifest-path=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690807 4998 flags.go:64] FLAG: --pod-max-pids="-1"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690813 4998 flags.go:64] FLAG: --pods-per-core="0"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690819 4998 flags.go:64] FLAG: --port="10250"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690824 4998 flags.go:64] FLAG: --protect-kernel-defaults="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690830 4998 flags.go:64] FLAG: --provider-id=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690835 4998 flags.go:64] FLAG: --qos-reserved=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690841 4998 flags.go:64] FLAG: --read-only-port="10255"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690847 4998 flags.go:64] FLAG: --register-node="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690853 4998 flags.go:64] FLAG: --register-schedulable="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690858 4998 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690868 4998 flags.go:64] FLAG: --registry-burst="10"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690873 4998 flags.go:64] FLAG: --registry-qps="5"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690879 4998 flags.go:64] FLAG: --reserved-cpus=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690885 4998 flags.go:64] FLAG: --reserved-memory=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690891 4998 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690897 4998 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690904 4998 flags.go:64] FLAG: --rotate-certificates="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690911 4998 flags.go:64] FLAG: --rotate-server-certificates="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690919 4998 flags.go:64] FLAG: --runonce="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690926 4998 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690934 4998 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690941 4998 flags.go:64] FLAG: --seccomp-default="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690948 4998 flags.go:64] FLAG: --serialize-image-pulls="true"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690955 4998 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690963 4998 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690971 4998 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690978 4998 flags.go:64] FLAG: --storage-driver-password="root"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690986 4998 flags.go:64] FLAG: --storage-driver-secure="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690992 4998 flags.go:64] FLAG: --storage-driver-table="stats"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.690998 4998 flags.go:64] FLAG: --storage-driver-user="root"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691003 4998 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691025 4998 flags.go:64] FLAG: --sync-frequency="1m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691032 4998 flags.go:64] FLAG: --system-cgroups=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691048 4998 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691069 4998 flags.go:64] FLAG: --system-reserved-cgroup=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691076 4998 flags.go:64] FLAG: --tls-cert-file=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691083 4998 flags.go:64] FLAG: --tls-cipher-suites="[]"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691094 4998 flags.go:64] FLAG: --tls-min-version=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691109 4998 flags.go:64] FLAG: --tls-private-key-file=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691117 4998 flags.go:64] FLAG: --topology-manager-policy="none"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691125 4998 flags.go:64] FLAG: --topology-manager-policy-options=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691133 4998 flags.go:64] FLAG: --topology-manager-scope="container"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691140 4998 flags.go:64] FLAG: --v="2"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691151 4998 flags.go:64] FLAG: --version="false"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691159 4998 flags.go:64] FLAG: --vmodule=""
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691167 4998 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.691173 4998 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691312 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691318 4998 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691324 4998 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691330 4998 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691335 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691340 4998 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691345 4998 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691350 4998 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691355 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691360 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691365 4998 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691370 4998 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691375 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691381 4998 feature_gate.go:328] unrecognized feature gate: OVNObservability
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691386 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691391 4998 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691407 4998 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691480 4998 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691485 4998 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691490 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691502 4998 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691507 4998 feature_gate.go:328] unrecognized feature gate: UpgradeStatus
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691519 4998 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691524 4998 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691530 4998 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691535 4998 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691540 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691545 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691550 4998 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691556 4998 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691561 4998 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691566 4998 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691571 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691576 4998 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691581 4998 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691586 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691591 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691596 4998 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691601 4998 feature_gate.go:328] unrecognized feature gate: Example
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691610 4998 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691615 4998 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691620 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691625 4998 feature_gate.go:328] unrecognized feature gate: SignatureStores
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691630 4998 feature_gate.go:328] unrecognized feature gate: PinnedImages
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691636 4998 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691641 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691647 4998 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691653 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691659 4998 feature_gate.go:328] unrecognized feature gate: NewOLM
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691664 4998 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691669 4998 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691674 4998 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691679 4998 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691684 4998 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691724 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691730 4998 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691734 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691739 4998 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691744 4998 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691749 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691754 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691759 4998 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691764 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPIController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691769 4998 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691774 4998 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691780 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691786 4998 feature_gate.go:328] unrecognized feature gate: DualReplica
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691793 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691799 4998 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691806 4998 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691812 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691823 4998 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691829 4998 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691835 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691872 4998 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691877 4998 feature_gate.go:328] unrecognized feature gate: Example2
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691882 4998 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691888 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691892 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691901 4998 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691908 4998 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691915 4998 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691924 4998 feature_gate.go:328] unrecognized feature gate: ShortCertRotation
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691931 4998 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691938 4998 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.691945 4998 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.692124 4998 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]}
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.732385 4998 server.go:530] "Kubelet version" kubeletVersion="v1.33.5"
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.732651 4998 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732747 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732780 4998 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732787 4998 feature_gate.go:328] unrecognized feature gate: DNSNameResolver
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732794 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImages
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732801 4998 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732807 4998 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732814 4998 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release.
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732823 4998 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732829 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732835 4998 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732840 4998 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732845 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732850 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732856 4998 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732862 4998 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732868 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732873 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732877 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732882 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732888 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732893 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732898 4998 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732903 4998 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732908 4998 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732913 4998 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732918 4998 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732923 4998 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732928 4998 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732932 4998 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732939 4998 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732946 4998 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732952 4998 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732958 4998 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732962 4998 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732967 4998 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732972 4998 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732979 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732984 4998 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732989 4998 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.732995 4998 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733000 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733005 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733010 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733015 4998 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733020 4998 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733024 4998 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733030 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733035 4998 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733040 4998 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733045 4998 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733050 4998 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733055 4998 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733073 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733084 4998 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733095 4998 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 
18:51:36.733106 4998 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733112 4998 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733117 4998 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733121 4998 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733126 4998 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733131 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733136 4998 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733141 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733146 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733152 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733157 4998 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733162 4998 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733167 4998 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733172 4998 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733177 4998 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733182 4998 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733188 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733193 4998 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733198 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733203 4998 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733208 4998 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733213 4998 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733218 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733223 4998 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733228 4998 feature_gate.go:328] unrecognized feature gate: Example Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733233 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 18:51:36 crc 
kubenswrapper[4998]: W1208 18:51:36.733238 4998 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733243 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733247 4998 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733253 4998 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733258 4998 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.733267 4998 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733435 4998 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733445 4998 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733450 4998 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733456 4998 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733461 4998 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733467 4998 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733473 4998 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733479 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733485 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733491 4998 feature_gate.go:328] unrecognized feature gate: NewOLM Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733498 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733505 4998 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733511 4998 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733517 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733522 4998 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733527 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Dec 08 
18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733533 4998 feature_gate.go:328] unrecognized feature gate: Example Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733537 4998 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733542 4998 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733547 4998 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733552 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733557 4998 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733562 4998 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733566 4998 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733571 4998 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733576 4998 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733581 4998 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733586 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733591 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733596 4998 feature_gate.go:328] unrecognized feature gate: PinnedImages Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733603 4998 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733608 4998 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733613 4998 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733619 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733625 4998 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733631 4998 feature_gate.go:328] unrecognized feature gate: DualReplica Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733637 4998 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733644 4998 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733650 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733656 4998 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733661 4998 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733666 4998 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Dec 08 
18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733671 4998 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733677 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733704 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733710 4998 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733714 4998 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733719 4998 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733725 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733730 4998 feature_gate.go:328] unrecognized feature gate: GatewayAPI Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733735 4998 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733740 4998 feature_gate.go:328] unrecognized feature gate: OVNObservability Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733745 4998 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733751 4998 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733755 4998 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733762 4998 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733767 4998 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733774 4998 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733780 4998 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733786 4998 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733791 4998 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733797 4998 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733804 4998 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733809 4998 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733815 4998 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733820 4998 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733825 4998 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733830 4998 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733836 4998 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733842 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733848 4998 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733853 4998 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733858 4998 feature_gate.go:328] unrecognized feature gate: InsightsConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733863 4998 feature_gate.go:328] unrecognized feature gate: Example2 Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733868 4998 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733874 4998 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733880 4998 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733885 4998 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733890 4998 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733895 4998 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733900 4998 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733905 4998 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733910 4998 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Dec 08 18:51:36 crc 
kubenswrapper[4998]: W1208 18:51:36.733915 4998 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733920 4998 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Dec 08 18:51:36 crc kubenswrapper[4998]: W1208 18:51:36.733925 4998 feature_gate.go:328] unrecognized feature gate: SignatureStores Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.733934 4998 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.734454 4998 server.go:962] "Client rotation is on, will bootstrap in background" Dec 08 18:51:36 crc kubenswrapper[4998]: E1208 18:51:36.738114 4998 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.741561 4998 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.741701 4998 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.742354 4998 server.go:1019] "Starting client certificate rotation" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.742475 4998 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.742703 4998 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.824225 4998 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 18:51:36 crc kubenswrapper[4998]: E1208 18:51:36.909127 4998 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.911756 4998 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.922373 4998 log.go:25] "Validated CRI v1 runtime API" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.947942 4998 log.go:25] "Validated CRI v1 image API" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.949436 4998 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.951441 4998 fs.go:135] Filesystem UUIDs: 
map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2025-12-08-18-45-31-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.951473 4998 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:44 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.970083 4998 manager.go:217] Machine: {Timestamp:2025-12-08 18:51:36.968675226 +0000 UTC m=+0.616717976 CPUVendorID:AuthenticAMD NumCores:8 NumPhysicalCores:1 NumSockets:8 CpuFrequency:2799998 MemoryCapacity:25195294720 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:57933dbd-5a28-4dc8-9ba9-34a04e3c67e1 BootID:c1301796-dc2b-4ae4-9d55-0e992e137827 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:12597645312 Type:vfs Inodes:3075597 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:12597649408 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:5039058944 Type:vfs Inodes:819200 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:2519527424 Type:vfs Inodes:615119 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:44 Capacity:1073741824 Type:vfs Inodes:3075597 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:429496729600 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:c1:c4:f8 Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:c1:c4:f8 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:b9:70:93 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:64:ed:df Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:74:81:74 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:8c:36:fd Speed:-1 Mtu:1496} {Name:eth10 MacAddress:ee:ca:3d:b8:6e:8b Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:5e:de:46:14:f4:2f Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:25195294720 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 
Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.970361 4998 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.970756 4998 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.972071 4998 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.972127 4998 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.972456 4998 topology_manager.go:138] "Creating topology manager with none policy" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.972472 4998 container_manager_linux.go:306] "Creating device plugin manager" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.972506 4998 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.972876 4998 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.973254 4998 state_mem.go:36] "Initialized new in-memory state store" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.973437 4998 server.go:1267] "Using root directory" path="/var/lib/kubelet" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.974127 4998 kubelet.go:491] "Attempting to sync node with API server" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.974161 4998 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.974187 4998 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.974204 
4998 kubelet.go:397] "Adding apiserver pod source" Dec 08 18:51:36 crc kubenswrapper[4998]: I1208 18:51:36.974225 4998 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.179991 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.180011 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.184882 4998 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.184980 4998 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.214297 4998 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.214372 4998 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.217265 4998 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.217674 4998 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218181 4998 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218723 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218750 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218760 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218769 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218778 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218788 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218802 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218812 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218825 4998 plugins.go:616] "Loaded volume plugin" 
pluginName="kubernetes.io/fc" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218845 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.218882 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.219022 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.219235 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.219251 4998 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.220448 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.236599 4998 watchdog_linux.go:99] "Systemd watchdog is not enabled" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.236714 4998 server.go:1295] "Started kubelet" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.236915 4998 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.237030 4998 server_v1.go:47] "podresources" method="list" useActivePods=true Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.237110 4998 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.237509 4998 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 08 18:51:37 crc systemd[1]: Started Kubernetes Kubelet. 
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.238182 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.187f5224dd3daaf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.236630258 +0000 UTC m=+0.884672959,LastTimestamp:2025-12-08 18:51:37.236630258 +0000 UTC m=+0.884672959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.238976 4998 server.go:317] "Adding debug handlers to kubelet server" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.239858 4998 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.240005 4998 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.241229 4998 volume_manager.go:295] "The desired_state_of_world populator starts" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.241242 4998 volume_manager.go:297] "Starting Kubelet Volume Manager" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.241390 4998 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.241731 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.241894 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms" Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.242790 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.243788 4998 factory.go:55] Registering systemd factory Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.243866 4998 factory.go:223] Registration of the systemd container factory successfully Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.244215 4998 factory.go:153] Registering CRI-O factory Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.244242 4998 factory.go:223] Registration of the crio container factory successfully Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.244306 4998 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.244333 4998 factory.go:103] Registering Raw factory Dec 08 18:51:37 crc 
kubenswrapper[4998]: I1208 18:51:37.244375 4998 manager.go:1196] Started watching for new ooms in manager Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.245025 4998 manager.go:319] Starting recovery of all containers Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291416 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291534 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291559 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291580 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291601 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291618 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291639 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.291658 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292021 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292080 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292101 4998 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292153 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292173 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292189 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292210 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292230 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292247 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292268 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292282 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292295 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292305 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292315 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292325 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292336 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292346 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292356 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292374 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292384 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292398 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292411 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292452 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292465 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292477 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" 
volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292489 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292503 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292514 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292525 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292536 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292706 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292748 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292768 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292783 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292800 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292816 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" 
volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292834 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292848 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292863 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292879 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292920 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292935 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292949 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292963 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292979 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.292994 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293012 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" 
seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293026 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293058 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293076 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293092 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293107 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293123 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293138 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293151 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293166 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293182 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293196 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Dec 08 18:51:37 crc 
kubenswrapper[4998]: I1208 18:51:37.293211 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293224 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293238 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.293256 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295321 4998 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295370 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295392 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295408 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295423 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295439 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295454 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" 
volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295469 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295486 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295502 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295522 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295538 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295554 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295569 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295587 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295601 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295618 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295632 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" 
volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295646 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295659 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295673 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295722 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295738 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295752 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295768 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295784 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295799 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295812 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295827 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" 
volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295841 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295853 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295865 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295877 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295887 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295898 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295908 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295919 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295929 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295939 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295950 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" 
volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295960 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295970 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.295981 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296005 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296016 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296027 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296038 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296048 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296058 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296069 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296080 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" 
volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296095 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296114 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296128 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296143 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296161 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296176 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296190 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296208 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296224 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296235 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296257 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" 
volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296288 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296315 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296343 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296365 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296378 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296401 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296416 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296438 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296461 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296486 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296510 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" 
volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296533 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296558 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296570 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296581 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296592 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296606 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296617 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296629 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296639 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296649 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296659 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" 
volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296671 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296708 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296720 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296730 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296740 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296750 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296765 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296775 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296784 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296795 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296804 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" 
volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296816 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296827 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296836 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296853 4998 manager.go:324] Recovery completed Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.296857 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297236 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297258 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297271 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297285 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297296 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297311 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297323 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297333 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297348 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297359 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297369 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297380 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297393 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297406 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297418 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297430 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297441 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297452 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" 
volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297464 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297493 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297506 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297517 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297528 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297540 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297550 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297568 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297581 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297591 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297601 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" 
volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297614 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297624 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297637 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297647 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297666 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297677 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297702 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297712 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297722 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297732 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297742 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297753 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297763 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297774 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297783 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297795 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297806 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297816 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297827 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297838 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297848 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297859 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" 
volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297869 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297879 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297891 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297902 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297912 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297961 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297975 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297985 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.297994 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298003 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298014 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" 
volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298024 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298034 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298045 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298054 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298064 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298073 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298084 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298094 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298104 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298113 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298124 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" 
volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298134 4998 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298144 4998 reconstruct.go:97] "Volume reconstruction finished" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.298152 4998 reconciler.go:26] "Reconciler: start to sync state" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.309168 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.315458 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.315510 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.315525 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.316640 4998 cpu_manager.go:222] "Starting CPU manager" policy="none" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.316741 4998 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.316868 4998 state_mem.go:36] "Initialized new in-memory state store" Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.343771 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.344649 4998 policy_none.go:49] "None policy: Start" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.344742 4998 memory_manager.go:186] "Starting memorymanager" policy="None" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.344765 4998 state_mem.go:35] "Initializing new in-memory state store" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.362248 4998 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.364726 4998 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.364766 4998 status_manager.go:230] "Starting to sync pod status with apiserver" Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.364800 4998 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.364813 4998 kubelet.go:2451] "Starting kubelet main sync loop"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.364862 4998 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.367370 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.394772 4998 manager.go:341] "Starting Device Plugin manager"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.394891 4998 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.394919 4998 server.go:85] "Starting device plugin registration server"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.395421 4998 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.395442 4998 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.395852 4998 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.396098 4998 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.396119 4998 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.399979 4998 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="non-existent label \"crio-containers\""
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.400049 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.443769 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.465078 4998 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.465420 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.467415 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.467475 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.467488 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.468355 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.468545 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.468586 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.469193 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.469259 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.469276 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.469193 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.469319 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.469332 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.471361 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.471987 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.472069 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.472496 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.472544 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.472561 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.472621 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.472652 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.472664 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.474617 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.474844 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.474935 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.475566 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.475605 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.475620 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.476208 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.476261 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.476273 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.476597 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.476851 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.476883 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.477537 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.477559 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.477570 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.477789 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.477811 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.477821 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.478657 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.478829 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.480197 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.480278 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.480298 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.495630 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.496813 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.496858 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.496873 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.496902 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.497659 4998 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.504971 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.514507 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.536072 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.556117 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.561358 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.606910 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607051 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607147 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607196 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607244 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607287 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607329 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607358 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607392 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607422 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607457 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607489 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607551 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607610 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607658 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607759 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607813 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607848 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607882 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607921 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607966 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.607994 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.608021 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.609155 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.609209 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.609385 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.609384 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.609592 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.609665 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.609805 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.760992 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761064 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761132 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761174 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761202 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761222 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761228 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761299 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761335 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761361 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761368 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761384 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761410 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761412 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761442 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761458 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761472 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761482 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761503 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761504 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761535 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761543 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761566 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761576 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761585 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761607 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761609 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761631 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761659 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761714 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761747 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761778 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.761808 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.763549 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.763597 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.763610 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.763649 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.764211 4998 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.806500 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.815885 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.837969 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: E1208 18:51:37.845463 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.858083 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.861586 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc"
Dec 08 18:51:37 crc kubenswrapper[4998]: W1208 18:51:37.866014 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a14caf222afb62aaabdc47808b6f944.slice/crio-cb99ed7fe5b85465c1ebfdbc69c52dbfea4a398626910137590de8aca4020736 WatchSource:0}: Error finding container cb99ed7fe5b85465c1ebfdbc69c52dbfea4a398626910137590de8aca4020736: Status 404 returned error can't find the container with id cb99ed7fe5b85465c1ebfdbc69c52dbfea4a398626910137590de8aca4020736
Dec 08 18:51:37 crc kubenswrapper[4998]: W1208 18:51:37.870302 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20c5c5b4bed930554494851fe3cb2b2a.slice/crio-82988eac6979202c23c83ae9fedd7cbcf9cbbba42d1396e2a6abf60b2e8e1883 WatchSource:0}: Error finding container 82988eac6979202c23c83ae9fedd7cbcf9cbbba42d1396e2a6abf60b2e8e1883: Status 404 returned error can't find the container with id 82988eac6979202c23c83ae9fedd7cbcf9cbbba42d1396e2a6abf60b2e8e1883
Dec 08 18:51:37 crc kubenswrapper[4998]: I1208 18:51:37.873604 4998 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 18:51:37 crc kubenswrapper[4998]: W1208 18:51:37.873751 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f0bc7fcb0822a2c13eb2d22cd8c0641.slice/crio-a5ac16984d18a5eab9bd282a12e161b23905eddd3d9eb8d8c5d08ba6c9d5e76c WatchSource:0}: Error finding container a5ac16984d18a5eab9bd282a12e161b23905eddd3d9eb8d8c5d08ba6c9d5e76c: Status 404 returned error can't find the container with id a5ac16984d18a5eab9bd282a12e161b23905eddd3d9eb8d8c5d08ba6c9d5e76c
Dec 08 18:51:37 crc kubenswrapper[4998]: W1208 18:51:37.888817 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e08c320b1e9e2405e6e0107bdf7eeb4.slice/crio-347d2518ab36a6c60a0b338024265b90e9eebef1270d6f9fdfa45df39115fdaf WatchSource:0}: Error finding container 347d2518ab36a6c60a0b338024265b90e9eebef1270d6f9fdfa45df39115fdaf: Status 404 returned error can't find the container with id 347d2518ab36a6c60a0b338024265b90e9eebef1270d6f9fdfa45df39115fdaf
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.279504 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.280410 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 08 18:51:38 crc kubenswrapper[4998]: E1208 18:51:38.280621 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Dec 08 18:51:38 crc kubenswrapper[4998]: E1208 18:51:38.284058 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.287544 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.287602 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.287620 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.287669 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 18:51:38 crc kubenswrapper[4998]: E1208 18:51:38.288168 4998 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc"
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.388192 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"31a95ac828e21e476e0966c537ca5cf7003f5aef4dfdb99684d24810a77e7c6c"}
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.389860 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"347d2518ab36a6c60a0b338024265b90e9eebef1270d6f9fdfa45df39115fdaf"}
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.391334 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a5ac16984d18a5eab9bd282a12e161b23905eddd3d9eb8d8c5d08ba6c9d5e76c"}
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.395889 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"cb99ed7fe5b85465c1ebfdbc69c52dbfea4a398626910137590de8aca4020736"}
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.399108 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"82988eac6979202c23c83ae9fedd7cbcf9cbbba42d1396e2a6abf60b2e8e1883"}
Dec 08 18:51:38 crc kubenswrapper[4998]: E1208 18:51:38.478361 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Dec 08 18:51:38 crc kubenswrapper[4998]: E1208 18:51:38.646716 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s"
Dec 08 18:51:38 crc kubenswrapper[4998]: E1208 18:51:38.765200 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Dec 08 18:51:38 crc kubenswrapper[4998]: I1208 18:51:38.964546 4998 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet"
Dec 08 18:51:38 crc kubenswrapper[4998]: E1208 18:51:38.966464 4998 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.088766 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.091974 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.092110 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.092192 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.092267 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc"
Dec 08 18:51:39 crc kubenswrapper[4998]: E1208 18:51:39.093378 4998 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.222100 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.407783 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30"}
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.407852 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4"}
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.409380 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2" exitCode=0
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.409415 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2"}
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.409727 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.410805 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.410854 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.410870 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.410908 4998 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267" exitCode=0
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.410995 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267"}
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.411122 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:39 crc kubenswrapper[4998]: E1208 18:51:39.411296 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.411699 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.411729 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.411747 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:39 crc kubenswrapper[4998]: E1208 18:51:39.411945 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.413422 4998 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2" exitCode=0
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.413516 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2"}
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.413717 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.414146 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.414178 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.414180 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.414189 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:39 crc kubenswrapper[4998]: E1208 18:51:39.414382 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.415187 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.415242 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.415255 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.415794 4998 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f" exitCode=0
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.415831 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f"}
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.416044 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.416788 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.416877 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:39 crc kubenswrapper[4998]: I1208 18:51:39.416942 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:39 crc kubenswrapper[4998]: E1208 18:51:39.417253 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.245027 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 08 18:51:40 crc kubenswrapper[4998]: E1208 18:51:40.247651 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="3.2s"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.440287 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd"}
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.443109 4998 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab" exitCode=0
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.443169 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab"}
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.443340 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.444295 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.444326 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.444335 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:51:40 crc kubenswrapper[4998]: E1208 18:51:40.444527 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.448420 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"355883c3b875ba7df515d5d07538ec1a017d38a87bf6cbef9f6a939b1b0f860c"}
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.452053 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"486121fa7a66609e79a4ec8139d2aadefdc5b8d1ed0c710a77116e00e8a28078"}
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.452186 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.452647 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.452666 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.452674 4998
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:40 crc kubenswrapper[4998]: E1208 18:51:40.452861 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.694402 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.695977 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.696018 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.696028 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:40 crc kubenswrapper[4998]: I1208 18:51:40.696056 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:40 crc kubenswrapper[4998]: E1208 18:51:40.696412 4998 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.145:6443: connect: connection refused" node="crc" Dec 08 18:51:40 crc kubenswrapper[4998]: E1208 18:51:40.915088 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.221891 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:51:41 crc kubenswrapper[4998]: E1208 18:51:41.303765 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.467282 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"dd499714a3956c76fc95cf29eb557f332ab8a3d8927878cd076ed6fe0b97da75"} Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.467329 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"83458962d98a0db15939e11f6ac7a1f814ac5cf95aec1adc4993753182d9e348"} Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.467447 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.468659 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.468711 
4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.468737 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:41 crc kubenswrapper[4998]: E1208 18:51:41.469454 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.474503 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3"} Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.474540 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564"} Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.474842 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.476843 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.476888 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.476903 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:41 crc kubenswrapper[4998]: E1208 18:51:41.477241 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.478511 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0"} Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.478557 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a"} Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.480188 4998 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c" exitCode=0 Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.480486 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c"} Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.480537 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.480830 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 
18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.481404 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.481436 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.481501 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.481514 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.481526 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:41 crc kubenswrapper[4998]: I1208 18:51:41.481642 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:41 crc kubenswrapper[4998]: E1208 18:51:41.481892 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:41 crc kubenswrapper[4998]: E1208 18:51:41.482375 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:41 crc kubenswrapper[4998]: E1208 18:51:41.494808 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:41 crc kubenswrapper[4998]: E1208 18:51:41.649266 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.145:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.227532 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.263949 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.272596 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.490834 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"50bcaf87a8bf9c3ecd3a747f711577260847aced62a63f664c2aa36cbbbbf1ad"} Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.490897 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38"} Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.491064 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.496582 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe"} Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.496627 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc"} Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.496675 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.496757 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.497607 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.497867 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.497917 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.497929 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.498184 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.498222 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:42 crc kubenswrapper[4998]: E1208 18:51:42.498185 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.498249 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:42 crc kubenswrapper[4998]: E1208 18:51:42.498806 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.499074 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.499098 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:42 crc kubenswrapper[4998]: I1208 18:51:42.499134 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:42 crc kubenswrapper[4998]: E1208 18:51:42.499416 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 
18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.033599 4998 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.279960 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506029 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506083 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506313 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2"} Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506340 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f"} Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506445 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506561 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506775 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506796 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.506805 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:43 crc kubenswrapper[4998]: E1208 18:51:43.507185 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.507837 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.507863 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.507878 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:43 crc kubenswrapper[4998]: E1208 18:51:43.508104 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.897138 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.898312 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.898366 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.898376 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:43 crc kubenswrapper[4998]: I1208 18:51:43.898403 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.515682 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533"} Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.515910 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.516231 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.516762 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.516852 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.516886 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.517153 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.517226 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:44 crc kubenswrapper[4998]: I1208 18:51:44.517242 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:44 crc kubenswrapper[4998]: E1208 18:51:44.518024 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:44 crc kubenswrapper[4998]: E1208 18:51:44.518370 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.002517 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.471074 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.471451 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.472480 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.472518 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.472527 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:45 crc kubenswrapper[4998]: E1208 18:51:45.472822 4998 
kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.512133 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.513359 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.513465 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.521894 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.521966 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.521985 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:45 crc kubenswrapper[4998]: E1208 18:51:45.522580 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.523407 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.523566 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.524255 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.524314 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.524326 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.524388 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.524410 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.524424 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:45 crc kubenswrapper[4998]: E1208 18:51:45.524828 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:45 crc kubenswrapper[4998]: E1208 18:51:45.525376 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:45 crc kubenswrapper[4998]: I1208 18:51:45.651268 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.339044 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Dec 08 18:51:46 crc 
kubenswrapper[4998]: I1208 18:51:46.527404 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.527406 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.528759 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.528826 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.528839 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.528885 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.528933 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:46 crc kubenswrapper[4998]: I1208 18:51:46.528954 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:46 crc kubenswrapper[4998]: E1208 18:51:46.529297 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:46 crc kubenswrapper[4998]: E1208 18:51:46.529876 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:47 crc kubenswrapper[4998]: E1208 18:51:47.400773 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.533836 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.534260 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.536018 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.536097 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.536126 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:50 crc kubenswrapper[4998]: E1208 18:51:50.536746 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.542149 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.542920 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.543919 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.543964 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:50 crc kubenswrapper[4998]: I1208 18:51:50.543983 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:50 crc kubenswrapper[4998]: E1208 18:51:50.544414 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:52 crc kubenswrapper[4998]: E1208 18:51:52.748259 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{crc.187f5224dd3daaf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.236630258 +0000 UTC m=+0.884672959,LastTimestamp:2025-12-08 18:51:37.236630258 +0000 UTC m=+0.884672959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:51:52 crc kubenswrapper[4998]: I1208 18:51:52.912578 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 18:51:52 crc kubenswrapper[4998]: I1208 18:51:52.912765 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 18:51:53 crc kubenswrapper[4998]: E1208 18:51:53.042875 4998 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": net/http: TLS handshake timeout" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.231075 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Dec 08 18:51:53 crc kubenswrapper[4998]: E1208 18:51:53.449197 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.554900 4998 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting 
headers)" start-of-body= Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.555022 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.896304 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.896676 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:53 crc kubenswrapper[4998]: E1208 18:51:53.900152 4998 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.982529 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.982591 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:53 crc kubenswrapper[4998]: I1208 18:51:53.982609 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:53 crc kubenswrapper[4998]: E1208 18:51:53.983343 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:54 crc kubenswrapper[4998]: I1208 18:51:54.096649 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Dec 08 18:51:54 crc kubenswrapper[4998]: I1208 18:51:54.624009 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:54 crc kubenswrapper[4998]: I1208 18:51:54.625215 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:54 crc kubenswrapper[4998]: I1208 18:51:54.625292 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:54 crc kubenswrapper[4998]: I1208 18:51:54.625305 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:54 crc kubenswrapper[4998]: E1208 18:51:54.626034 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:54 crc kubenswrapper[4998]: I1208 18:51:54.640143 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.002897 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" start-of-body= Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.003051 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" 
probeResult="failure" output="Get \"https://192.168.126.11:6443/livez\": context deadline exceeded" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.010043 4998 trace.go:236] Trace[91991903]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 18:51:45.008) (total time: 10001ms): Dec 08 18:51:55 crc kubenswrapper[4998]: Trace[91991903]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:51:55.009) Dec 08 18:51:55 crc kubenswrapper[4998]: Trace[91991903]: [10.001724028s] [10.001724028s] END Dec 08 18:51:55 crc kubenswrapper[4998]: E1208 18:51:55.010080 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.625922 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.626833 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.626960 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.627054 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:51:55 crc kubenswrapper[4998]: E1208 18:51:55.627541 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.750170 4998 trace.go:236] Trace[725152168]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 18:51:45.747) (total time: 10002ms): Dec 08 18:51:55 crc kubenswrapper[4998]: Trace[725152168]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (18:51:55.750) Dec 08 18:51:55 crc kubenswrapper[4998]: Trace[725152168]: [10.002841106s] [10.002841106s] END Dec 08 18:51:55 crc kubenswrapper[4998]: E1208 18:51:55.750544 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:51:55 crc kubenswrapper[4998]: I1208 18:51:55.935921 4998 trace.go:236] Trace[1093366059]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 18:51:45.934) (total time: 10001ms): Dec 08 18:51:55 crc kubenswrapper[4998]: Trace[1093366059]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:51:55.935) Dec 08 18:51:55 crc kubenswrapper[4998]: Trace[1093366059]: [10.00187792s] [10.00187792s] END Dec 08 18:51:55 crc kubenswrapper[4998]: E1208 18:51:55.936003 4998 reflector.go:200] "Failed to watch" err="failed to 
list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:51:56 crc kubenswrapper[4998]: I1208 18:51:56.088058 4998 trace.go:236] Trace[1646828984]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (08-Dec-2025 18:51:46.086) (total time: 10001ms): Dec 08 18:51:56 crc kubenswrapper[4998]: Trace[1646828984]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (18:51:56.087) Dec 08 18:51:56 crc kubenswrapper[4998]: Trace[1646828984]: [10.001888522s] [10.001888522s] END Dec 08 18:51:56 crc kubenswrapper[4998]: E1208 18:51:56.088111 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:51:56 crc kubenswrapper[4998]: I1208 18:51:56.369977 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 18:51:56 crc kubenswrapper[4998]: I1208 18:51:56.370064 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Dec 08 18:51:57 crc kubenswrapper[4998]: E1208 18:51:57.416537 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:51:59 crc kubenswrapper[4998]: E1208 18:51:59.852978 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.011405 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.011787 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.012810 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.012878 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.012902 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:00 crc kubenswrapper[4998]: E1208 18:52:00.013619 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from 
the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.019754 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.301206 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.302519 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.302604 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.302625 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.302661 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:00 crc kubenswrapper[4998]: E1208 18:52:00.318054 4998 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.664880 4998 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.665396 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.666485 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.666561 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:00 crc kubenswrapper[4998]: I1208 18:52:00.666578 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:00 crc kubenswrapper[4998]: E1208 18:52:00.667258 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.381101 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.405821 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33312->192.168.126.11:17697: read: connection reset by peer" start-of-body= Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.405923 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33312->192.168.126.11:17697: read: connection reset by peer" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 
18:52:01.406464 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.406603 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.454643 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.454954 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.455783 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.455831 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.455842 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:01 crc kubenswrapper[4998]: E1208 18:52:01.456853 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.460747 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.531755 4998 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.549672 4998 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.671544 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.673996 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="50bcaf87a8bf9c3ecd3a747f711577260847aced62a63f664c2aa36cbbbbf1ad" exitCode=255 Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674049 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"50bcaf87a8bf9c3ecd3a747f711577260847aced62a63f664c2aa36cbbbbf1ad"} Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674211 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674218 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume 
controller attach/detach" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674844 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674845 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674898 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674913 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674870 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.674958 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:01 crc kubenswrapper[4998]: E1208 18:52:01.675362 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:01 crc kubenswrapper[4998]: E1208 18:52:01.675534 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:01 crc kubenswrapper[4998]: I1208 18:52:01.675799 4998 scope.go:117] "RemoveContainer" containerID="50bcaf87a8bf9c3ecd3a747f711577260847aced62a63f664c2aa36cbbbbf1ad" Dec 08 18:52:02 crc kubenswrapper[4998]: I1208 18:52:02.226221 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.236344 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:52:02 crc kubenswrapper[4998]: I1208 18:52:02.678449 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 18:52:02 crc kubenswrapper[4998]: I1208 18:52:02.681774 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497"} Dec 08 18:52:02 crc kubenswrapper[4998]: I1208 18:52:02.682019 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:02 crc kubenswrapper[4998]: I1208 18:52:02.682660 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:02 crc kubenswrapper[4998]: I1208 18:52:02.682907 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:02 crc kubenswrapper[4998]: I1208 18:52:02.683122 4998 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.683836 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.754969 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224dd3daaf2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.236630258 +0000 UTC m=+0.884672959,LastTimestamp:2025-12-08 18:51:37.236630258 +0000 UTC m=+0.884672959,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.759891 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.765712 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.770646 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f19c01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC 
m=+0.963574467,LastTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC m=+0.963574467,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.774542 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e6e7292c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.3987331 +0000 UTC m=+1.046775790,LastTimestamp:2025-12-08 18:51:37.3987331 +0000 UTC m=+1.046775790,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.792547 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f10970\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.467447999 +0000 UTC m=+1.115490689,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.799561 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f16335\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.4674814 +0000 UTC m=+1.115524090,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.804957 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f19c01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f19c01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: 
NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC m=+0.963574467,LastTimestamp:2025-12-08 18:51:37.46749392 +0000 UTC m=+1.115536600,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.810773 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f10970\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.469241616 +0000 UTC m=+1.117284306,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.815547 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f16335\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.469269637 +0000 UTC m=+1.117312327,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.821461 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f19c01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f19c01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC m=+0.963574467,LastTimestamp:2025-12-08 18:51:37.469282607 +0000 UTC m=+1.117325297,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.834748 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f10970\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.469308238 +0000 UTC m=+1.117350918,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.839360 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f16335\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.469326878 +0000 UTC m=+1.117369568,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.843169 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f19c01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f19c01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC m=+0.963574467,LastTimestamp:2025-12-08 18:51:37.469337369 +0000 UTC m=+1.117380059,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.847575 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f10970\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.472531253 +0000 UTC m=+1.120573933,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.856158 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f16335\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace 
\"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.472552343 +0000 UTC m=+1.120595033,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.860830 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f19c01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f19c01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC m=+0.963574467,LastTimestamp:2025-12-08 18:51:37.472566454 +0000 UTC m=+1.120609144,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.865673 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f10970\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.472636656 +0000 UTC m=+1.120679346,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.870226 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f16335\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.472657726 +0000 UTC m=+1.120700416,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.945213 4998 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.187f5224e1f19c01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f19c01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC m=+0.963574467,LastTimestamp:2025-12-08 18:51:37.472670116 +0000 UTC m=+1.120712806,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.949379 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f10970\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.475587545 +0000 UTC m=+1.123630235,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.955077 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f16335\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.475613945 +0000 UTC m=+1.123656635,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.963446 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f19c01\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f19c01 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315531777 +0000 UTC m=+0.963574467,LastTimestamp:2025-12-08 18:51:37.475626826 +0000 UTC m=+1.123669526,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc 
kubenswrapper[4998]: E1208 18:52:02.968693 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f10970\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f10970 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315494256 +0000 UTC m=+0.963536946,LastTimestamp:2025-12-08 18:51:37.476233001 +0000 UTC m=+1.124275691,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.973917 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.187f5224e1f16335\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.187f5224e1f16335 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.315517237 +0000 UTC m=+0.963559937,LastTimestamp:2025-12-08 18:51:37.476267862 +0000 UTC m=+1.124310552,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.979728 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225033d7b85 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.874152325 +0000 UTC m=+1.522195015,LastTimestamp:2025-12-08 18:51:37.874152325 +0000 UTC m=+1.522195015,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.985126 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5225033de6f2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.874179826 +0000 UTC m=+1.522222516,LastTimestamp:2025-12-08 18:51:37.874179826 +0000 UTC m=+1.522222516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.990145 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5225039706f0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.88002072 +0000 UTC m=+1.528063420,LastTimestamp:2025-12-08 18:51:37.88002072 +0000 UTC m=+1.528063420,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:02 crc kubenswrapper[4998]: E1208 18:52:02.994780 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f5225046041fa openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.89320857 +0000 UTC m=+1.541251250,LastTimestamp:2025-12-08 18:51:37.89320857 +0000 UTC m=+1.541251250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:02.999981 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5225049c85e2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:37.897158114 +0000 UTC m=+1.545200804,LastTimestamp:2025-12-08 18:51:37.897158114 +0000 UTC m=+1.545200804,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.004618 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f52253ca4712e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.837201198 +0000 UTC m=+2.485243888,LastTimestamp:2025-12-08 18:51:38.837201198 +0000 UTC m=+2.485243888,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.009451 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f52253ca4c19d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.837221789 +0000 UTC m=+2.485264499,LastTimestamp:2025-12-08 18:51:38.837221789 +0000 UTC m=+2.485264499,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.016489 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f52253ca5c6e9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created 
container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.837288681 +0000 UTC m=+2.485331381,LastTimestamp:2025-12-08 18:51:38.837288681 +0000 UTC m=+2.485331381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.020999 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f52253ca5a961 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.837281121 +0000 UTC m=+2.485323811,LastTimestamp:2025-12-08 18:51:38.837281121 +0000 UTC m=+2.485323811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.025797 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52253ca6ab50 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.837347152 +0000 UTC m=+2.485389842,LastTimestamp:2025-12-08 18:51:38.837347152 +0000 UTC m=+2.485389842,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.030223 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f52253dbaedac openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.855452076 +0000 UTC m=+2.503494766,LastTimestamp:2025-12-08 18:51:38.855452076 +0000 UTC m=+2.503494766,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.035038 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create 
resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f52253e0eb5ec openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.860942828 +0000 UTC m=+2.508985518,LastTimestamp:2025-12-08 18:51:38.860942828 +0000 UTC m=+2.508985518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.040094 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f52253e15cd5d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.861407581 +0000 UTC m=+2.509450271,LastTimestamp:2025-12-08 18:51:38.861407581 +0000 UTC m=+2.509450271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.047199 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f52253e191460 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.861622368 +0000 UTC m=+2.509665058,LastTimestamp:2025-12-08 18:51:38.861622368 +0000 UTC m=+2.509665058,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.053990 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f52253e51f513 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.865349907 +0000 UTC m=+2.513392617,LastTimestamp:2025-12-08 18:51:38.865349907 +0000 UTC m=+2.513392617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.058925 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52253e5919c3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:38.865818051 +0000 UTC m=+2.513860741,LastTimestamp:2025-12-08 18:51:38.865818051 +0000 UTC m=+2.513860741,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.063742 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f52254e7b77af openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:39.136505775 +0000 UTC m=+2.784548465,LastTimestamp:2025-12-08 18:51:39.136505775 +0000 UTC m=+2.784548465,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.068597 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f52254faf2a20 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:39.156671008 +0000 UTC m=+2.804713718,LastTimestamp:2025-12-08 18:51:39.156671008 +0000 UTC m=+2.804713718,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.079480 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f52254fd1eb80 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:39.158948736 +0000 UTC m=+2.806991426,LastTimestamp:2025-12-08 18:51:39.158948736 +0000 UTC m=+2.806991426,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.090942 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52255ef869d2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:39.413129682 +0000 UTC m=+3.061172372,LastTimestamp:2025-12-08 18:51:39.413129682 +0000 UTC m=+3.061172372,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.095179 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f52255f067bf6 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:39.41405183 +0000 UTC m=+3.062094520,LastTimestamp:2025-12-08 18:51:39.41405183 +0000 UTC m=+3.062094520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.100094 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f52255f1cc02b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:39.415511083 +0000 UTC m=+3.063553783,LastTimestamp:2025-12-08 18:51:39.415511083 +0000 UTC m=+3.063553783,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.104418 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f52255f55f5b7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:39.419260343 +0000 UTC m=+3.067303033,LastTimestamp:2025-12-08 18:51:39.419260343 +0000 UTC m=+3.067303033,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.109406 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f52258deeed4b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.201037131 +0000 UTC m=+3.849079811,LastTimestamp:2025-12-08 
18:51:40.201037131 +0000 UTC m=+3.849079811,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.115268 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f52258def099d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.201044381 +0000 UTC m=+3.849087071,LastTimestamp:2025-12-08 18:51:40.201044381 +0000 UTC m=+3.849087071,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.119143 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52258df8464b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.201649739 +0000 UTC m=+3.849692429,LastTimestamp:2025-12-08 18:51:40.201649739 +0000 UTC m=+3.849692429,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.123348 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f52258e05f61b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.202546715 +0000 UTC m=+3.850589405,LastTimestamp:2025-12-08 18:51:40.202546715 +0000 UTC m=+3.850589405,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.127345 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" 
event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f522590945c57 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.245433431 +0000 UTC m=+3.893476111,LastTimestamp:2025-12-08 18:51:40.245433431 +0000 UTC m=+3.893476111,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.139409 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.187f5225909aab1d openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.245846813 +0000 UTC m=+3.893889503,LastTimestamp:2025-12-08 18:51:40.245846813 +0000 UTC m=+3.893889503,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.144371 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f522590b16478 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.247336056 +0000 UTC m=+3.895378746,LastTimestamp:2025-12-08 18:51:40.247336056 +0000 UTC m=+3.895378746,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.149882 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f52259138d78e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.256212878 +0000 UTC m=+3.904255588,LastTimestamp:2025-12-08 18:51:40.256212878 +0000 UTC m=+3.904255588,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.154309 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522591531893 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.257933459 +0000 UTC m=+3.905976149,LastTimestamp:2025-12-08 18:51:40.257933459 +0000 UTC m=+3.905976149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.219243 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5225922a59ca openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.272040394 +0000 UTC m=+3.920083094,LastTimestamp:2025-12-08 18:51:40.272040394 +0000 UTC m=+3.920083094,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.232076 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52259c8839fb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.445964795 +0000 UTC m=+4.094007475,LastTimestamp:2025-12-08 18:51:40.445964795 +0000 UTC m=+4.094007475,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.245023 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.254334 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f52259cce4874 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.45055602 +0000 UTC m=+4.098598720,LastTimestamp:2025-12-08 18:51:40.45055602 +0000 UTC m=+4.098598720,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.259375 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5225a02fad6f openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.507270511 +0000 UTC m=+4.155313201,LastTimestamp:2025-12-08 18:51:40.507270511 +0000 UTC m=+4.155313201,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.269239 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5225a04258e9 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.508494057 +0000 UTC m=+4.156536747,LastTimestamp:2025-12-08 18:51:40.508494057 +0000 UTC m=+4.156536747,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.326398 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5225a4b1c4f2 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.582905074 +0000 UTC m=+4.230947764,LastTimestamp:2025-12-08 18:51:40.582905074 +0000 UTC m=+4.230947764,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.375774 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5225a6c7fe0e openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.617915918 +0000 UTC m=+4.265958608,LastTimestamp:2025-12-08 18:51:40.617915918 +0000 UTC m=+4.265958608,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.389796 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5225a6e2645b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.619646043 +0000 UTC m=+4.267688733,LastTimestamp:2025-12-08 18:51:40.619646043 +0000 UTC m=+4.267688733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.395552 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225a71e8921 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.623587617 +0000 UTC m=+4.271630307,LastTimestamp:2025-12-08 18:51:40.623587617 +0000 UTC m=+4.271630307,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.400490 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225ac38a38b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.709184395 +0000 UTC m=+4.357227085,LastTimestamp:2025-12-08 18:51:40.709184395 +0000 UTC m=+4.357227085,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.405620 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225ac4b9d90 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.710428048 +0000 UTC m=+4.358470738,LastTimestamp:2025-12-08 18:51:40.710428048 +0000 UTC m=+4.358470738,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.414523 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5225b6493bae openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.878044078 +0000 UTC m=+4.526086768,LastTimestamp:2025-12-08 18:51:40.878044078 +0000 UTC m=+4.526086768,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.427142 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5225b8b63d6e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.918742382 +0000 UTC m=+4.566785082,LastTimestamp:2025-12-08 18:51:40.918742382 +0000 UTC m=+4.566785082,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.435426 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5225bad56137 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:40.954337591 +0000 UTC m=+4.602380281,LastTimestamp:2025-12-08 18:51:40.954337591 +0000 UTC m=+4.602380281,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.440215 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5225c1b5cf27 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.069709095 +0000 UTC m=+4.717751775,LastTimestamp:2025-12-08 18:51:41.069709095 +0000 UTC m=+4.717751775,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.456170 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225d0c2c05b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.322215515 +0000 UTC m=+4.970258205,LastTimestamp:2025-12-08 18:51:41.322215515 +0000 UTC m=+4.970258205,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.473053 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225d1b4fe0f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.338091023 +0000 UTC m=+4.986133713,LastTimestamp:2025-12-08 18:51:41.338091023 +0000 UTC m=+4.986133713,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.480914 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource 
\"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225d1c8b043 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.339381827 +0000 UTC m=+4.987424517,LastTimestamp:2025-12-08 18:51:41.339381827 +0000 UTC m=+4.987424517,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.488523 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5225d5cf7525 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.406934309 +0000 UTC m=+5.054976989,LastTimestamp:2025-12-08 18:51:41.406934309 +0000 UTC m=+5.054976989,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.493177 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.187f5225d6c3d564 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.422949732 +0000 UTC m=+5.070992422,LastTimestamp:2025-12-08 18:51:41.422949732 +0000 UTC m=+5.070992422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.580624 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5225da612243 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.483590211 +0000 UTC m=+5.131632901,LastTimestamp:2025-12-08 18:51:41.483590211 +0000 UTC m=+5.131632901,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.630205 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225eba7750c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.773411596 +0000 UTC m=+5.421454296,LastTimestamp:2025-12-08 18:51:41.773411596 +0000 UTC m=+5.421454296,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.701888 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225f323ab0e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.898992398 +0000 UTC m=+5.547035098,LastTimestamp:2025-12-08 18:51:41.898992398 +0000 UTC m=+5.547035098,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.706275 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225f336b400 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.900239872 +0000 UTC m=+5.548282572,LastTimestamp:2025-12-08 18:51:41.900239872 +0000 UTC m=+5.548282572,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.711218 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f522601a08f95 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.142058389 +0000 UTC m=+5.790101079,LastTimestamp:2025-12-08 18:51:42.142058389 +0000 UTC m=+5.790101079,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.716225 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522601ff3c8d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.148263053 +0000 UTC m=+5.796305743,LastTimestamp:2025-12-08 18:51:42.148263053 +0000 UTC m=+5.796305743,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.717779 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.718493 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.720442 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497" exitCode=255 Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.720572 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497"} Dec 08 
18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.720765 4998 scope.go:117] "RemoveContainer" containerID="50bcaf87a8bf9c3ecd3a747f711577260847aced62a63f664c2aa36cbbbbf1ad" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.720979 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.721014 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f522602b7dd28 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.160362792 +0000 UTC m=+5.808405492,LastTimestamp:2025-12-08 18:51:42.160362792 +0000 UTC m=+5.808405492,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.722157 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.722194 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.722208 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.722671 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:03 crc kubenswrapper[4998]: I1208 18:52:03.723023 4998 scope.go:117] "RemoveContainer" containerID="7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.723259 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.730565 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f522602d97cc4 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.16256634 +0000 UTC m=+5.810609040,LastTimestamp:2025-12-08 
18:51:42.16256634 +0000 UTC m=+5.810609040,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.739355 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522603daca65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.179428965 +0000 UTC m=+5.827471655,LastTimestamp:2025-12-08 18:51:42.179428965 +0000 UTC m=+5.827471655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.827276 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52261449d5ba openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.455141818 +0000 UTC m=+6.103184498,LastTimestamp:2025-12-08 18:51:42.455141818 +0000 UTC m=+6.103184498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.836471 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52261511321c openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.468207132 +0000 UTC m=+6.116249822,LastTimestamp:2025-12-08 18:51:42.468207132 +0000 UTC m=+6.116249822,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.842544 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52261523f233 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.469435955 +0000 UTC m=+6.117478645,LastTimestamp:2025-12-08 18:51:42.469435955 +0000 UTC m=+6.117478645,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.852112 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f522627ab324e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.780289614 +0000 UTC m=+6.428332304,LastTimestamp:2025-12-08 18:51:42.780289614 +0000 UTC m=+6.428332304,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.857578 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f522628823534 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.794380596 +0000 UTC m=+6.442423316,LastTimestamp:2025-12-08 18:51:42.794380596 +0000 UTC m=+6.442423316,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.872744 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5226289c152b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.796076331 +0000 UTC m=+6.444119021,LastTimestamp:2025-12-08 
18:51:42.796076331 +0000 UTC m=+6.444119021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.884317 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f5226496ff49a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:43.346832538 +0000 UTC m=+6.994875218,LastTimestamp:2025-12-08 18:51:43.346832538 +0000 UTC m=+6.994875218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.893225 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52264a4645eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:43.360878059 +0000 UTC m=+7.008920759,LastTimestamp:2025-12-08 18:51:43.360878059 +0000 UTC m=+7.008920759,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.929371 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52264a5fd27b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:43.362552443 +0000 UTC m=+7.010595133,LastTimestamp:2025-12-08 18:51:43.362552443 +0000 UTC m=+7.010595133,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.934255 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52265ba7de0e openshift-etcd 0 0001-01-01 00:00:00 +0000 
UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:43.65248667 +0000 UTC m=+7.300529360,LastTimestamp:2025-12-08 18:51:43.65248667 +0000 UTC m=+7.300529360,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.961970 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.187f52265cb0d2eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:43.669850859 +0000 UTC m=+7.317893549,LastTimestamp:2025-12-08 18:51:43.669850859 +0000 UTC m=+7.317893549,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.967738 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:52:03 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f5228839bb9af openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 08 18:52:03 crc kubenswrapper[4998]: body: Dec 08 18:52:03 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:52.912714159 +0000 UTC m=+16.560757059,LastTimestamp:2025-12-08 18:51:52.912714159 +0000 UTC m=+16.560757059,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:52:03 crc kubenswrapper[4998]: > Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.975187 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5228839f0895 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Liveness probe failed: Get 
\"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:52.912930965 +0000 UTC m=+16.560973665,LastTimestamp:2025-12-08 18:51:52.912930965 +0000 UTC m=+16.560973665,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.981720 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Dec 08 18:52:03 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-controller-manager-crc.187f5228a9e406f5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers) Dec 08 18:52:03 crc kubenswrapper[4998]: body: Dec 08 18:52:03 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:53.554986741 +0000 UTC m=+17.203029431,LastTimestamp:2025-12-08 18:51:53.554986741 +0000 UTC m=+17.203029431,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:52:03 crc kubenswrapper[4998]: > Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.985878 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.187f5228a9e511ea openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:53.555055082 +0000 UTC m=+17.203097772,LastTimestamp:2025-12-08 18:51:53.555055082 +0000 UTC m=+17.203097772,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:03 crc kubenswrapper[4998]: E1208 18:52:03.990841 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:52:03 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f52290032ea7e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: Get "https://192.168.126.11:6443/livez": context deadline exceeded Dec 08 18:52:03 crc kubenswrapper[4998]: body: Dec 08 18:52:03 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:55.002997374 +0000 UTC m=+18.651040064,LastTimestamp:2025-12-08 18:51:55.002997374 +0000 UTC m=+18.651040064,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:52:03 crc kubenswrapper[4998]: > Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.002014 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5229003591d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://192.168.126.11:6443/livez\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:55.003171289 +0000 UTC m=+18.651213989,LastTimestamp:2025-12-08 18:51:55.003171289 +0000 UTC m=+18.651213989,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.009003 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:52:04 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f522951ae45e1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Dec 08 18:52:04 crc kubenswrapper[4998]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Dec 08 18:52:04 crc kubenswrapper[4998]: Dec 08 18:52:04 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:56.370036193 +0000 UTC m=+20.018078893,LastTimestamp:2025-12-08 18:51:56.370036193 +0000 UTC m=+20.018078893,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:52:04 crc kubenswrapper[4998]: > Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.016255 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522951af2692 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:56.370093714 +0000 UTC m=+20.018136424,LastTimestamp:2025-12-08 18:51:56.370093714 +0000 UTC m=+20.018136424,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.023167 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Dec 08 18:52:04 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f522a7dd76aed openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:33312->192.168.126.11:17697: read: connection reset by peer Dec 08 18:52:04 crc kubenswrapper[4998]: body: Dec 08 18:52:04 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:01.405897453 +0000 UTC m=+25.053940143,LastTimestamp:2025-12-08 18:52:01.405897453 +0000 UTC m=+25.053940143,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:52:04 crc kubenswrapper[4998]: > Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.030360 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522a7dd84cc1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:33312->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:01.405955265 +0000 UTC m=+25.053997955,LastTimestamp:2025-12-08 18:52:01.405955265 +0000 UTC m=+25.053997955,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.036330 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"openshift-kube-apiserver\"" event=< Dec 08 18:52:04 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f522a7de1b9b5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Dec 08 18:52:04 crc kubenswrapper[4998]: body: Dec 08 18:52:04 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:01.406572981 +0000 UTC m=+25.054615691,LastTimestamp:2025-12-08 18:52:01.406572981 +0000 UTC m=+25.054615691,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:52:04 crc kubenswrapper[4998]: > Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.042209 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522a7de31983 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:01.406663043 +0000 UTC m=+25.054705753,LastTimestamp:2025-12-08 18:52:01.406663043 +0000 UTC m=+25.054705753,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.060836 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5225f336b400\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225f336b400 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.900239872 +0000 UTC m=+5.548282572,LastTimestamp:2025-12-08 18:52:01.677055856 +0000 UTC m=+25.325098546,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.067132 4998 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.187f522601ff3c8d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522601ff3c8d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.148263053 +0000 UTC m=+5.796305743,LastTimestamp:2025-12-08 18:52:02.140383956 +0000 UTC m=+25.788426646,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.072125 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522603daca65\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522603daca65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.179428965 +0000 UTC m=+5.827471655,LastTimestamp:2025-12-08 18:52:02.151219652 +0000 UTC m=+25.799262352,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: E1208 18:52:04.077175 4998 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522b07f6e685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:03.723216517 +0000 UTC m=+27.371259207,LastTimestamp:2025-12-08 18:52:03.723216517 +0000 UTC m=+27.371259207,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:04 crc kubenswrapper[4998]: I1208 18:52:04.230264 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:04 crc kubenswrapper[4998]: I1208 
18:52:04.724858 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:52:05 crc kubenswrapper[4998]: E1208 18:52:05.043500 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:52:05 crc kubenswrapper[4998]: I1208 18:52:05.240050 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:06 crc kubenswrapper[4998]: E1208 18:52:06.001895 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:52:06 crc kubenswrapper[4998]: I1208 18:52:06.282091 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:06 crc kubenswrapper[4998]: E1208 18:52:06.858638 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:07 crc kubenswrapper[4998]: I1208 18:52:07.225930 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:07 crc kubenswrapper[4998]: I1208 18:52:07.319217 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:07 crc kubenswrapper[4998]: I1208 18:52:07.320767 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:07 crc kubenswrapper[4998]: I1208 18:52:07.320834 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:07 crc kubenswrapper[4998]: I1208 18:52:07.320850 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:07 crc kubenswrapper[4998]: I1208 18:52:07.320923 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:07 crc kubenswrapper[4998]: E1208 18:52:07.331445 4998 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:07 crc kubenswrapper[4998]: E1208 18:52:07.417559 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:08 crc kubenswrapper[4998]: I1208 
18:52:08.224564 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:08 crc kubenswrapper[4998]: E1208 18:52:08.224763 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:52:09 crc kubenswrapper[4998]: I1208 18:52:09.226268 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:10 crc kubenswrapper[4998]: I1208 18:52:10.225741 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:11 crc kubenswrapper[4998]: I1208 18:52:11.226071 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.228910 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.682497 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.683185 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.684421 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.684476 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.684489 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:12 crc kubenswrapper[4998]: E1208 18:52:12.685029 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.685453 4998 scope.go:117] "RemoveContainer" containerID="7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497" Dec 08 18:52:12 crc kubenswrapper[4998]: E1208 18:52:12.685795 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 
18:52:12 crc kubenswrapper[4998]: E1208 18:52:12.691257 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522b07f6e685\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522b07f6e685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:03.723216517 +0000 UTC m=+27.371259207,LastTimestamp:2025-12-08 18:52:12.685750603 +0000 UTC m=+36.333793293,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.912076 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.912467 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.913567 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.913624 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.913634 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:12 crc kubenswrapper[4998]: E1208 18:52:12.914081 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:12 crc kubenswrapper[4998]: I1208 18:52:12.914345 4998 scope.go:117] "RemoveContainer" containerID="7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497" Dec 08 18:52:12 crc kubenswrapper[4998]: E1208 18:52:12.914559 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:12 crc kubenswrapper[4998]: E1208 18:52:12.919516 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522b07f6e685\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522b07f6e685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:03.723216517 +0000 UTC m=+27.371259207,LastTimestamp:2025-12-08 18:52:12.914520967 +0000 UTC m=+36.562563657,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:13 crc kubenswrapper[4998]: I1208 18:52:13.229729 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:13 crc kubenswrapper[4998]: E1208 18:52:13.867225 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:14 crc kubenswrapper[4998]: I1208 18:52:14.227314 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:14 crc kubenswrapper[4998]: I1208 18:52:14.331903 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:14 crc kubenswrapper[4998]: I1208 18:52:14.333219 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:14 crc kubenswrapper[4998]: I1208 18:52:14.333289 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:14 crc kubenswrapper[4998]: I1208 18:52:14.333321 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:14 crc kubenswrapper[4998]: I1208 18:52:14.333365 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:14 crc kubenswrapper[4998]: E1208 18:52:14.350887 4998 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:15 crc kubenswrapper[4998]: I1208 18:52:15.227479 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:16 crc kubenswrapper[4998]: I1208 18:52:16.223613 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:17 crc kubenswrapper[4998]: I1208 18:52:17.229286 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode 
publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:17 crc kubenswrapper[4998]: E1208 18:52:17.417855 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:18 crc kubenswrapper[4998]: I1208 18:52:18.226746 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:19 crc kubenswrapper[4998]: I1208 18:52:19.229634 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:20 crc kubenswrapper[4998]: I1208 18:52:20.229964 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:20 crc kubenswrapper[4998]: E1208 18:52:20.874117 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:21 crc kubenswrapper[4998]: I1208 18:52:21.226286 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:21 crc kubenswrapper[4998]: I1208 18:52:21.351242 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:21 crc kubenswrapper[4998]: I1208 18:52:21.352988 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:21 crc kubenswrapper[4998]: I1208 18:52:21.353138 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:21 crc kubenswrapper[4998]: I1208 18:52:21.353207 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:21 crc kubenswrapper[4998]: I1208 18:52:21.353249 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:21 crc kubenswrapper[4998]: E1208 18:52:21.366944 4998 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:22 crc kubenswrapper[4998]: I1208 18:52:22.228517 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:23 crc kubenswrapper[4998]: I1208 18:52:23.225351 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot 
get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:23 crc kubenswrapper[4998]: E1208 18:52:23.238762 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Dec 08 18:52:24 crc kubenswrapper[4998]: I1208 18:52:24.227790 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.227457 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.365558 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.367220 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.367294 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.367325 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:25 crc kubenswrapper[4998]: E1208 18:52:25.367964 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.368391 4998 scope.go:117] "RemoveContainer" containerID="7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497" Dec 08 18:52:25 crc kubenswrapper[4998]: E1208 18:52:25.381520 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f5225f336b400\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f5225f336b400 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:41.900239872 +0000 UTC m=+5.548282572,LastTimestamp:2025-12-08 18:52:25.371131726 +0000 UTC m=+49.019174476,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.481055 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.481374 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.483241 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.483314 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.483329 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:25 crc kubenswrapper[4998]: E1208 18:52:25.483871 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:25 crc kubenswrapper[4998]: E1208 18:52:25.659095 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522601ff3c8d\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522601ff3c8d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.148263053 +0000 UTC m=+5.796305743,LastTimestamp:2025-12-08 18:52:25.652945229 +0000 UTC m=+49.300987919,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:25 crc kubenswrapper[4998]: E1208 18:52:25.673330 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522603daca65\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522603daca65 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:51:42.179428965 +0000 UTC m=+5.827471655,LastTimestamp:2025-12-08 18:52:25.66620137 +0000 UTC m=+49.314244060,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.796638 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.798963 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df"} Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.799230 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.799906 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.799940 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:25 crc kubenswrapper[4998]: I1208 18:52:25.799949 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:25 crc kubenswrapper[4998]: E1208 18:52:25.800417 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:26 crc kubenswrapper[4998]: I1208 18:52:26.230090 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:26 crc kubenswrapper[4998]: E1208 18:52:26.419501 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.230222 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:27 crc kubenswrapper[4998]: E1208 18:52:27.418562 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.807088 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.807902 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.809359 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df" exitCode=255 Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.809427 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df"} Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.809484 4998 scope.go:117] "RemoveContainer" containerID="7c576f43838453f84f5a668ea26b8e52f704f99b06ea28a88d15377bdc0e7497" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.809768 4998 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.810774 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.810994 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.811085 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:27 crc kubenswrapper[4998]: E1208 18:52:27.812117 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:27 crc kubenswrapper[4998]: I1208 18:52:27.812636 4998 scope.go:117] "RemoveContainer" containerID="acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df" Dec 08 18:52:27 crc kubenswrapper[4998]: E1208 18:52:27.814283 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:27 crc kubenswrapper[4998]: E1208 18:52:27.824030 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522b07f6e685\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522b07f6e685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:03.723216517 +0000 UTC m=+27.371259207,LastTimestamp:2025-12-08 18:52:27.814179486 +0000 UTC m=+51.462222206,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:27 crc kubenswrapper[4998]: E1208 18:52:27.881214 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:28 crc kubenswrapper[4998]: I1208 18:52:28.228557 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:28 crc kubenswrapper[4998]: I1208 18:52:28.367302 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:28 crc kubenswrapper[4998]: I1208 18:52:28.369770 4998 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:28 crc kubenswrapper[4998]: I1208 18:52:28.369889 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:28 crc kubenswrapper[4998]: I1208 18:52:28.369909 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:28 crc kubenswrapper[4998]: I1208 18:52:28.369968 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:28 crc kubenswrapper[4998]: E1208 18:52:28.381568 4998 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:28 crc kubenswrapper[4998]: E1208 18:52:28.513370 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Dec 08 18:52:28 crc kubenswrapper[4998]: I1208 18:52:28.815648 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 18:52:28 crc kubenswrapper[4998]: E1208 18:52:28.840169 4998 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Dec 08 18:52:29 crc kubenswrapper[4998]: I1208 18:52:29.229575 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:30 crc kubenswrapper[4998]: I1208 18:52:30.227207 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:31 crc kubenswrapper[4998]: I1208 18:52:31.224351 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:32 crc kubenswrapper[4998]: I1208 18:52:32.226847 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:32 crc kubenswrapper[4998]: I1208 18:52:32.912478 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:32 crc kubenswrapper[4998]: I1208 18:52:32.913506 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:32 crc kubenswrapper[4998]: I1208 18:52:32.915277 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Dec 08 18:52:32 crc kubenswrapper[4998]: I1208 18:52:32.915416 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:32 crc kubenswrapper[4998]: I1208 18:52:32.915509 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:32 crc kubenswrapper[4998]: E1208 18:52:32.916098 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:32 crc kubenswrapper[4998]: I1208 18:52:32.916493 4998 scope.go:117] "RemoveContainer" containerID="acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df" Dec 08 18:52:32 crc kubenswrapper[4998]: E1208 18:52:32.916938 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:32 crc kubenswrapper[4998]: E1208 18:52:32.925561 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522b07f6e685\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522b07f6e685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:03.723216517 +0000 UTC m=+27.371259207,LastTimestamp:2025-12-08 18:52:32.916868539 +0000 UTC m=+56.564911229,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:33 crc kubenswrapper[4998]: I1208 18:52:33.226855 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:34 crc kubenswrapper[4998]: I1208 18:52:34.229643 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:34 crc kubenswrapper[4998]: E1208 18:52:34.890062 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.227014 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: 
User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.381747 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.383430 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.383464 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.383474 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.383493 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:35 crc kubenswrapper[4998]: E1208 18:52:35.395749 4998 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.800645 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.801093 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.802318 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.802393 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.802419 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:35 crc kubenswrapper[4998]: E1208 18:52:35.803167 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:35 crc kubenswrapper[4998]: I1208 18:52:35.803582 4998 scope.go:117] "RemoveContainer" containerID="acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df" Dec 08 18:52:35 crc kubenswrapper[4998]: E1208 18:52:35.804027 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:35 crc kubenswrapper[4998]: E1208 18:52:35.812013 4998 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.187f522b07f6e685\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.187f522b07f6e685 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:52:03.723216517 +0000 UTC m=+27.371259207,LastTimestamp:2025-12-08 18:52:35.803955682 +0000 UTC m=+59.451998412,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Dec 08 18:52:36 crc kubenswrapper[4998]: I1208 18:52:36.228749 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:37 crc kubenswrapper[4998]: I1208 18:52:37.226102 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:37 crc kubenswrapper[4998]: E1208 18:52:37.419564 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:38 crc kubenswrapper[4998]: I1208 18:52:38.227705 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:39 crc kubenswrapper[4998]: I1208 18:52:39.227280 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:40 crc kubenswrapper[4998]: I1208 18:52:40.227067 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:41 crc kubenswrapper[4998]: I1208 18:52:41.226404 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:41 crc kubenswrapper[4998]: E1208 18:52:41.897881 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Dec 08 18:52:42 crc kubenswrapper[4998]: I1208 18:52:42.227447 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:42 crc kubenswrapper[4998]: I1208 18:52:42.396549 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" 
Dec 08 18:52:42 crc kubenswrapper[4998]: I1208 18:52:42.398201 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:42 crc kubenswrapper[4998]: I1208 18:52:42.398256 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:42 crc kubenswrapper[4998]: I1208 18:52:42.398265 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:42 crc kubenswrapper[4998]: I1208 18:52:42.398296 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:42 crc kubenswrapper[4998]: E1208 18:52:42.408382 4998 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Dec 08 18:52:43 crc kubenswrapper[4998]: I1208 18:52:43.226382 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:44 crc kubenswrapper[4998]: I1208 18:52:44.226316 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:45 crc kubenswrapper[4998]: I1208 18:52:45.225918 4998 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Dec 08 18:52:45 crc kubenswrapper[4998]: I1208 18:52:45.453127 4998 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-5hqq8" Dec 08 18:52:45 crc kubenswrapper[4998]: I1208 18:52:45.458477 4998 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-5hqq8" Dec 08 18:52:45 crc kubenswrapper[4998]: I1208 18:52:45.492130 4998 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Dec 08 18:52:45 crc kubenswrapper[4998]: I1208 18:52:45.743259 4998 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 08 18:52:46 crc kubenswrapper[4998]: I1208 18:52:46.461492 4998 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-01-07 18:47:45 +0000 UTC" deadline="2025-12-31 09:11:12.840485037 +0000 UTC" Dec 08 18:52:46 crc kubenswrapper[4998]: I1208 18:52:46.462131 4998 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="542h18m26.378360336s" Dec 08 18:52:47 crc kubenswrapper[4998]: E1208 18:52:47.425609 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.366255 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.368842 4998 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.368938 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.368955 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[4998]: E1208 18:52:48.370065 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.370637 4998 scope.go:117] "RemoveContainer" containerID="acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.892259 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.894290 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a"} Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.894512 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.895070 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.895107 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:48 crc kubenswrapper[4998]: I1208 18:52:48.895117 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:48 crc kubenswrapper[4998]: E1208 18:52:48.895597 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.409115 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.410373 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.410454 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.410490 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.410654 4998 kubelet_node_status.go:78] "Attempting to register node" node="crc" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.427336 4998 kubelet_node_status.go:127] "Node was previously registered" node="crc" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.427787 4998 kubelet_node_status.go:81] "Successfully registered node" node="crc" Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.427823 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 18:52:49 
crc kubenswrapper[4998]: I1208 18:52:49.432534 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.432590 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.432602 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.432617 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.432628 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.453966 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.462577 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.462613 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.462623 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.462636 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.462645 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.476524 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.485068 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.485103 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.485113 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.485126 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.485136 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.497558 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.505651 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.505712 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.505727 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.505743 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.506032 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:49Z","lastTransitionTime":"2025-12-08T18:52:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.518058 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.518179 4998 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.518205 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.618395 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.718766 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.819420 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.898944 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.899845 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.901792 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a" exitCode=255
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.901857 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a"}
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.901908 4998 scope.go:117] "RemoveContainer" containerID="acc7c125a91ead6fa00a88844f69578fafcacf4d313790cd0fe00e9ffafb02df"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.902286 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.903065 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.903104 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.903114 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.903890 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:52:49 crc kubenswrapper[4998]: I1208 18:52:49.904256 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.904551 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 18:52:49 crc kubenswrapper[4998]: E1208 18:52:49.932825 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.033116 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.133449 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.234080 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.334857 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.435602 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.535741 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.636762 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.737369 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.837740 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:50 crc kubenswrapper[4998]: I1208 18:52:50.907199 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log"
Dec 08 18:52:50 crc kubenswrapper[4998]: E1208 18:52:50.937887 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.038280 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.138411 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.239210 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.339567 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.440751 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.542859 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.643635 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.743982 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.844964 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:51 crc kubenswrapper[4998]: E1208 18:52:51.945846 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.046674 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.147273 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.247452 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.348081 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.448998 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.549181 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.650339 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.750974 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.851536 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:52 crc kubenswrapper[4998]: I1208 18:52:52.911963 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:52:52 crc kubenswrapper[4998]: I1208 18:52:52.912423 4998 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach"
Dec 08 18:52:52 crc kubenswrapper[4998]: I1208 18:52:52.913269 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:52:52 crc kubenswrapper[4998]: I1208 18:52:52.913319 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:52:52 crc kubenswrapper[4998]: I1208 18:52:52.913330 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.913836 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc"
Dec 08 18:52:52 crc kubenswrapper[4998]: I1208 18:52:52.914117 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.914343 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944"
Dec 08 18:52:52 crc kubenswrapper[4998]: E1208 18:52:52.952797 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.053074 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.153861 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.254760 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.354884 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.455149 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.555631 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.656319 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.756486 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.857434 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:53 crc kubenswrapper[4998]: E1208 18:52:53.957824 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.059029 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.160168 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.260945 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.361430 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.462306 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.562571 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.663323 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.763649 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.864356 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:54 crc kubenswrapper[4998]: E1208 18:52:54.965302 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.066785 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.167609 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found"
Dec
08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.268466 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.369567 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.470481 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.571484 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.671869 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.772452 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.873048 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:55 crc kubenswrapper[4998]: E1208 18:52:55.973489 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.074295 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.174796 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.275533 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.375995 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.476411 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.577606 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.678741 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.779453 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.880524 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:56 crc kubenswrapper[4998]: E1208 18:52:56.980883 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.081402 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: I1208 18:52:57.156663 4998 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.182423 4998 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.282562 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.383225 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: I1208 18:52:57.384661 4998 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.426732 4998 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.484198 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.584856 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.685567 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.786480 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.887095 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:57 crc kubenswrapper[4998]: E1208 18:52:57.987658 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.088548 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.189144 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.289847 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.390831 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.491595 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.592141 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.693016 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.793463 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.894564 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:58 crc kubenswrapper[4998]: I1208 18:52:58.894934 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:52:58 crc kubenswrapper[4998]: I1208 18:52:58.895745 4998 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Dec 08 18:52:58 crc kubenswrapper[4998]: I1208 18:52:58.897920 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:58 crc kubenswrapper[4998]: I1208 18:52:58.898003 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:58 crc kubenswrapper[4998]: I1208 18:52:58.898066 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.899536 4998 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Dec 08 18:52:58 crc kubenswrapper[4998]: I1208 18:52:58.900053 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.900497 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:52:58 crc kubenswrapper[4998]: E1208 18:52:58.995235 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.096221 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.197206 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.297638 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.398944 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.499767 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.519384 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.527103 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.527442 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.527604 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.527899 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.528091 4998 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:59Z","lastTransitionTime":"2025-12-08T18:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.548350 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.554747 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.554811 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.554833 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.554860 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.554932 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:59Z","lastTransitionTime":"2025-12-08T18:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.576382 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.582175 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.582289 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.582310 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.582339 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.582361 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:59Z","lastTransitionTime":"2025-12-08T18:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.599961 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.604987 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.605061 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.605081 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.605164 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:52:59 crc kubenswrapper[4998]: I1208 18:52:59.605185 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:52:59Z","lastTransitionTime":"2025-12-08T18:52:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.624397 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:59Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.624673 4998 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.624763 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.724838 4998 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.825888 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:52:59 crc kubenswrapper[4998]: E1208 18:52:59.926321 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:53:00 crc kubenswrapper[4998]: E1208 18:53:00.027496 4998 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.087672 4998 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.130429 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.130476 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.130487 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.130504 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.130517 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.142038 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.170459 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.233395 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.233429 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.233439 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.233452 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.233461 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.267255 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.335834 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.335879 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.335889 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.335905 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.335918 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.367341 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.439073 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.439185 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.439242 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.439271 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.439293 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.467570 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.542244 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.542298 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.542311 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.542327 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.542337 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.645563 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.645711 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.645736 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.645776 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.645797 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.749554 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.749628 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.749651 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.749677 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.749720 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.837659 4998 apiserver.go:52] "Watching apiserver" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.850783 4998 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.852322 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.852381 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.852398 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.852419 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.852390 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-wjfn5","openshift-kube-apiserver/kube-apiserver-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-ovn-kubernetes/ovnkube-node-h7zr9","openshift-etcd/etcd-crc","openshift-image-registry/node-ca-lmsm8","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-network-operator/iptables-alerter-5jnd7","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr","openshift-machine-config-operator/machine-config-daemon-gwq5q","openshift-multus/multus-72nfz","openshift-multus/multus-additional-cni-plugins-9kdnj","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/network-metrics-daemon-z9wmf"] Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.852434 4998 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.854153 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.855716 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:00 crc kubenswrapper[4998]: E1208 18:53:00.856145 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.857101 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:00 crc kubenswrapper[4998]: E1208 18:53:00.857172 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.858072 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.860095 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.860783 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.861045 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.861841 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.863470 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.863782 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.864932 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.865206 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.881179 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.881186 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:00 crc kubenswrapper[4998]: E1208 18:53:00.882057 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.881140 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.886343 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.886720 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.886858 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.886825 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.887380 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.887632 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.887945 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.891581 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.891936 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.893654 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.891758 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.898152 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.898951 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.901933 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.910740 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.911035 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.911179 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.911381 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.914418 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.919556 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.919616 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.920270 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-72nfz" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.923784 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.925105 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.925197 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.925289 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.928256 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.928546 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.928545 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.930158 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.930333 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.930457 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.931291 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:00 crc kubenswrapper[4998]: E1208 18:53:00.931647 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.932546 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.933132 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.934177 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.938157 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.944417 4998 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.945355 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.945486 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a" Dec 08 18:53:00 crc kubenswrapper[4998]: E1208 18:53:00.945671 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.949504 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.951587 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.955414 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.955449 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.955460 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.955478 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.955490 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:00Z","lastTransitionTime":"2025-12-08T18:53:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.966481 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.978017 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.989170 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:00 crc kubenswrapper[4998]: I1208 18:53:00.998089 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.008205 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.011335 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.011374 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.011395 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.011416 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.011430 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.011445 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.011465 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 
18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012204 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012273 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012351 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012418 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012439 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012484 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012507 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012537 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012534 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012542 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012559 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012581 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012605 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012629 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012621 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012653 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012676 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012752 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012770 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012789 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012835 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012868 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012889 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012904 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012935 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: 
\"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012954 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012968 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012982 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.012998 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013005 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013015 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013111 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013138 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013158 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013174 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013205 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013225 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013248 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013265 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013277 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: 
"samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013284 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013331 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013367 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013396 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013418 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013441 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013464 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013486 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013510 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013535 4998 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013560 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013758 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013837 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013869 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013976 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.013981 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014201 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014223 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014435 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014495 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014521 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014519 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014837 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.014875 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015124 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015397 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015515 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015532 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015824 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015858 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015915 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.015942 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016063 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016134 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: 
\"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016159 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016406 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016440 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016502 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016529 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020765 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020801 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020863 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020889 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020944 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: 
\"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020966 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016510 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.016260 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017080 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017114 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017283 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017302 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017323 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017498 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017750 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017805 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.017970 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.018088 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.018450 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.018727 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.018659 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.019027 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.019120 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.019132 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.019277 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.019341 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.019492 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020006 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.019993 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020049 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020276 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020501 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). 
InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020513 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.021245 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.021296 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.021450 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.020298 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.021862 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.021873 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.021987 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.022501 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.022521 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.022708 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.022885 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.023103 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.024075 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.024105 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.024567 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.025529 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.025800 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.026060 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.026082 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.026412 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.027010 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.027665 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.029827 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.031169 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.531094181 +0000 UTC m=+85.179136881 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.030922 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.031570 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.031822 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.032137 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.032763 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.033111 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.033535 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.032791 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.033767 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.033814 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.033843 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034262 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034414 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034415 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034472 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034494 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034515 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034553 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034571 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034596 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034614 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034633 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034960 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod 
"f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.034991 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035020 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035038 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035076 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035100 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035119 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035135 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035152 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035167 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035175 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035211 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035230 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035246 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035265 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035282 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035362 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035383 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035402 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035427 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod 
\"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035408 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035456 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035501 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035516 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035534 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035556 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035573 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035589 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035606 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035621 4998 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035649 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035672 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035736 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035756 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035777 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035794 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035810 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035835 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035850 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " 
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035851 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035870 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035889 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035908 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035923 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035946 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035963 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035979 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.035995 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036017 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036033 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036050 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036069 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036086 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036102 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036118 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036133 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036150 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036167 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036168 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" 
(OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036184 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036202 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036226 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036244 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036263 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036284 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036272 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036331 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036349 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037844 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037874 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037912 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037932 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037951 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037987 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038017 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038073 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038093 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038159 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038178 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038194 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038249 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038269 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038322 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038343 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038360 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038407 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod 
\"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038424 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038466 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038508 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038576 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038601 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038674 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038973 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039023 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039043 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039060 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039114 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039135 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039153 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039185 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039202 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039219 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039239 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039272 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039298 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039320 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039353 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039370 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039393 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039456 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039496 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039554 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039592 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039628 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039672 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039742 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039762 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039778 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039816 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039868 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039908 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039945 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040000 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040018 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040041 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040076 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040096 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040114 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040161 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040180 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040198 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040232 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040252 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040278 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040315 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040334 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040355 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040393 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040411 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040428 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040517 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-system-cni-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040556 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-cni-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040576 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-cni-multus\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040595 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-var-lib-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040703 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " 
pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040730 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040748 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040781 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c186590-6bde-4b05-ac4d-9e6f0e656d17-proxy-tls\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040802 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040817 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-hostroot\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040839 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040872 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-kubelet\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040889 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-netns\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040906 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-etc-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040939 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-cnibin\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040955 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-daemon-config\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040973 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdw2c\" (UniqueName: \"kubernetes.io/projected/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-kube-api-access-gdw2c\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040991 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brvct\" (UniqueName: \"kubernetes.io/projected/ab88c832-775d-46c6-9167-aa51d0574b17-kube-api-access-brvct\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041025 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041043 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041066 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041100 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-netns\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" 
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041115 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-ovn\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041131 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc7150c6-b180-4712-a5ed-6b25328d0118-ovn-node-metrics-cert\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041193 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f0a43997-c346-42c7-a485-b2b55c22c9c6-hosts-file\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041230 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26tn9\" (UniqueName: \"kubernetes.io/projected/f0a43997-c346-42c7-a485-b2b55c22c9c6-kube-api-access-26tn9\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041271 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041353 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc4703c-51fa-4a35-ab04-0a6028035fb2-serviceca\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041421 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2qzp\" (UniqueName: \"kubernetes.io/projected/2cc4703c-51fa-4a35-ab04-0a6028035fb2-kube-api-access-n2qzp\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041440 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-cnibin\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041473 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041525 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-bin\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041584 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lvgz\" (UniqueName: \"kubernetes.io/projected/fc7150c6-b180-4712-a5ed-6b25328d0118-kube-api-access-9lvgz\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041613 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-systemd-units\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041679 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-k8s-cni-cncf-io\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041748 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041770 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041815 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbwzm\" (UniqueName: \"kubernetes.io/projected/0c186590-6bde-4b05-ac4d-9e6f0e656d17-kube-api-access-mbwzm\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041841 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041858 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-log-socket\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041895 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041913 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0c186590-6bde-4b05-ac4d-9e6f0e656d17-rootfs\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041935 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-socket-dir-parent\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041968 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-kubelet\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041986 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-system-cni-dir\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042004 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-os-release\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042052 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-cni-binary-copy\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042140 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-conf-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 
18:53:01.042161 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042180 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-ovn-kubernetes\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042217 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-netd\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042235 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-env-overrides\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042255 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-script-lib\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042299 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042319 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2w77\" (UniqueName: \"kubernetes.io/projected/085d31f3-c7fb-4aca-903c-9db17e8d0047-kube-api-access-s2w77\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042337 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8867028-389a-494e-b230-ed29201b63ca-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042376 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042398 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042416 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-config\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042433 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f0a43997-c346-42c7-a485-b2b55c22c9c6-tmp-dir\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042472 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-cni-binary-copy\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042493 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj679\" (UniqueName: \"kubernetes.io/projected/b8867028-389a-494e-b230-ed29201b63ca-kube-api-access-pj679\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042510 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042667 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c186590-6bde-4b05-ac4d-9e6f0e656d17-mcd-auth-proxy-config\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042710 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-slash\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042743 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc4703c-51fa-4a35-ab04-0a6028035fb2-host\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042786 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-os-release\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042806 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-multus-certs\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042828 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-etc-kubernetes\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042877 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042903 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-cni-bin\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042921 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-systemd\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042960 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-node-log\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042979 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " 
pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043002 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043209 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043224 4998 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043235 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043268 4998 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043279 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043289 4998 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043299 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043311 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043323 4998 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043350 4998 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043361 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Dec 08 
18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043371 4998 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043470 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043508 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043528 4998 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043547 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043565 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043581 4998 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043598 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043614 4998 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043629 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043644 4998 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043658 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043744 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043763 4998 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043778 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043794 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043810 4998 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043824 4998 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036324 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.036856 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037447 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037614 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.037774 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.038464 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039256 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039411 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039493 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039829 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.039947 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040235 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040371 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). 
InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040476 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040557 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.040948 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041029 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041041 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041274 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041306 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.041564 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042112 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042303 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042544 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.042623 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043044 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043128 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043329 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043357 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.043598 4998 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.043993 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.044739 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.045116 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.045225 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.045248 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.045271 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.045365 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.045813 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.046240 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.046311 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047122 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047367 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047388 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047490 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047821 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047599 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047865 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.048768 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.048851 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.048994 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.049147 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.049236 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.049726 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.049802 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.050029 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.050143 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.050209 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.050390 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.050570 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.050954 4998 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.051247 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.051593 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.051848 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.052261 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.052875 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.053027 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.053103 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.053158 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.053297 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.053508 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.053540 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.053912 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.054171 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.054366 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.054452 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.055087 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.056434 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.056678 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.056739 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.056858 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.057179 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.057706 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.058322 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.058734 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.058739 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.058884 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.059185 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.059348 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.059519 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.059746 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.060074 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.060119 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.060295 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.060461 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.047380 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.060630 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.060752 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.060923 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.061058 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.061398 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.061535 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.061745 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.061766 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.061777 4998 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.061812 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.061842 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.561824818 +0000 UTC m=+85.209867508 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.061859 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.061979 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062064 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062072 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062135 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062318 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062328 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062420 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062731 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.062864 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.063138 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.063316 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.063703 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.063802 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.064061 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.064318 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.064437 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.064573 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.064634 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.065096 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.065238 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.065316 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.565291109 +0000 UTC m=+85.213333799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.065373 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.565365161 +0000 UTC m=+85.213407861 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.065514 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.065804 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.065908 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.065974 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.066118 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.066144 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.066160 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.066193 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.066411 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.067195 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.067296 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.067419 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.068147 4998 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.068431 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.068481 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.068550 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.068643 4998 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.068703 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.068826 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.069080 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.069116 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.069129 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.069147 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.069159 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070120 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070164 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070191 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070220 4998 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070247 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070273 4998 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070304 4998 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070331 4998 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070470 4998 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070536 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070573 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070609 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.070889 4998 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071101 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071128 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071149 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071171 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071193 4998 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071215 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071238 4998 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071279 4998 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071306 4998 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071329 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071351 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071371 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071391 4998 reconciler_common.go:299] "Volume 
detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071413 4998 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071434 4998 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071456 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071491 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071512 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071532 4998 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071552 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071575 4998 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071609 4998 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071719 4998 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071754 4998 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071788 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071798 4998 reconciler_common.go:299] 
"Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071810 4998 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071821 4998 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071831 4998 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071841 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071851 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071860 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.071869 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.072408 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.072702 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.073372 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.074399 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.074623 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.075399 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.075411 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.075976 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.076224 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.083413 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.088606 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.089998 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.090035 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.090051 4998 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.090118 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.590096119 +0000 UTC m=+85.238138799 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.090140 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.091664 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.097098 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.098038 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"806263eb-1527-40bc-9f4d-dbaa9ccae40c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath
\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\
",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.104538 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.112492 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.112848 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.114194 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.125861 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-72nfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gdw2c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-72nfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.137369 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf1d2f-fd23-4a18-96bc-cfec142c5909\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":
\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW1208 18:52:49.039997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:49.040168 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:49.041022 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2686458259/tls.crt::/tmp/serving-cert-2686458259/tls.key\\\\\\\"\\\\nI1208 18:52:49.681709 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:49.685354 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:49.685372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:49.685429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:49.685439 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating 
requests\\\\\\\" limit=200\\\\nI1208 18:52:49.690533 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 18:52:49.690550 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 18:52:49.690565 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690572 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:49.690580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:49.690583 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:49.690586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 18:52:49.693927 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cp
u\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.147149 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.154489 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.164332 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"085d31f3-c7fb-4aca-903c-9db17e8d0047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\
\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88
dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9kdnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.170929 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.170976 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.170987 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.171004 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.171015 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172266 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc4703c-51fa-4a35-ab04-0a6028035fb2-serviceca\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172302 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2qzp\" (UniqueName: \"kubernetes.io/projected/2cc4703c-51fa-4a35-ab04-0a6028035fb2-kube-api-access-n2qzp\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172319 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-cnibin\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172335 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172350 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-bin\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172369 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9lvgz\" (UniqueName: \"kubernetes.io/projected/fc7150c6-b180-4712-a5ed-6b25328d0118-kube-api-access-9lvgz\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172382 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-systemd-units\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172398 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-k8s-cni-cncf-io\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172434 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-k8s-cni-cncf-io\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172465 4998 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172488 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-bin\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172494 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-cnibin\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172652 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172676 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mbwzm\" (UniqueName: \"kubernetes.io/projected/0c186590-6bde-4b05-ac4d-9e6f0e656d17-kube-api-access-mbwzm\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172740 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-log-socket\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172743 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-systemd-units\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172812 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8867028-389a-494e-b230-ed29201b63ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ql7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.172864 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-log-socket\") pod \"ovnkube-node-h7zr9\" (UID: 
\"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.173206 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0c186590-6bde-4b05-ac4d-9e6f0e656d17-rootfs\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.173234 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-socket-dir-parent\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.173818 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.173416 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0c186590-6bde-4b05-ac4d-9e6f0e656d17-rootfs\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.173463 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-socket-dir-parent\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174081 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-kubelet\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174234 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-system-cni-dir\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174369 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-os-release\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174537 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-cni-binary-copy\") pod \"multus-72nfz\" (UID: 
\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174657 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-conf-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174771 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-ovn-kubernetes\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174899 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-netd\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175009 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-env-overrides\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175128 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-script-lib\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175262 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175381 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-ovn-kubernetes\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175356 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-conf-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174803 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-system-cni-dir\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " 
pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175452 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-netd\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175390 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s2w77\" (UniqueName: \"kubernetes.io/projected/085d31f3-c7fb-4aca-903c-9db17e8d0047-kube-api-access-s2w77\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175787 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-cni-binary-copy\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.175201 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-os-release\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.174845 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-kubelet\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.173377 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2cc4703c-51fa-4a35-ab04-0a6028035fb2-serviceca\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.176024 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8867028-389a-494e-b230-ed29201b63ca-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.176147 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-config\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.176262 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f0a43997-c346-42c7-a485-b2b55c22c9c6-tmp-dir\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc 
kubenswrapper[4998]: I1208 18:53:01.176382 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-cni-binary-copy\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.176505 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pj679\" (UniqueName: \"kubernetes.io/projected/b8867028-389a-494e-b230-ed29201b63ca-kube-api-access-pj679\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.176663 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.176800 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c186590-6bde-4b05-ac4d-9e6f0e656d17-mcd-auth-proxy-config\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.176914 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-slash\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177035 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc4703c-51fa-4a35-ab04-0a6028035fb2-host\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177154 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-os-release\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177261 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-multus-certs\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177363 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " 
pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177474 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-etc-kubernetes\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177602 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-cni-bin\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177739 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-systemd\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177852 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-node-log\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177929 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-env-overrides\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177978 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.178192 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-system-cni-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.178322 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-cni-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.178425 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-config\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.178438 4998 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-cni-multus\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.178676 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-var-lib-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.178852 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179003 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c186590-6bde-4b05-ac4d-9e6f0e656d17-proxy-tls\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179123 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179237 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-hostroot\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179362 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-kubelet\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179538 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-netns\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179713 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-etc-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179852 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" 
(UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-cnibin\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179967 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-daemon-config\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180076 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gdw2c\" (UniqueName: \"kubernetes.io/projected/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-kube-api-access-gdw2c\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180192 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brvct\" (UniqueName: \"kubernetes.io/projected/ab88c832-775d-46c6-9167-aa51d0574b17-kube-api-access-brvct\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180326 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180438 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180555 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180720 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-netns\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180864 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-ovn\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.180979 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" 
(UniqueName: \"kubernetes.io/secret/fc7150c6-b180-4712-a5ed-6b25328d0118-ovn-node-metrics-cert\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.181092 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f0a43997-c346-42c7-a485-b2b55c22c9c6-hosts-file\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.181215 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-26tn9\" (UniqueName: \"kubernetes.io/projected/f0a43997-c346-42c7-a485-b2b55c22c9c6-kube-api-access-26tn9\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.181428 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.181533 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.181627 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.181773 4998 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.181907 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182046 4998 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182142 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182244 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182372 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182498 4998 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182594 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182696 4998 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182807 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.182906 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183004 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183137 4998 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183245 4998 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183351 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183486 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183616 4998 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183750 4998 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183846 4998 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.183937 4998 
reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184072 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184177 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184274 4998 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184370 4998 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184462 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184558 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184699 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184811 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.184951 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185050 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185142 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185233 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185336 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" 
(UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185465 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185563 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185661 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185773 4998 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185873 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.185975 4998 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186074 4998 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186171 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186264 4998 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186355 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186445 4998 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186543 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186638 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: 
\"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186735 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186856 4998 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186974 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-cni-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186922 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-system-cni-dir\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186974 4998 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187038 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187054 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187070 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187083 4998 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187095 4998 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187133 4998 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187146 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" 
DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187159 4998 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187171 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187183 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187196 4998 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187210 4998 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187223 4998 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187237 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187249 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187261 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187274 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187287 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187299 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187311 4998 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc 
kubenswrapper[4998]: I1208 18:53:01.187323 4998 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187335 4998 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187348 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187361 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187373 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187385 4998 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187396 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187408 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187423 4998 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187436 4998 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187458 4998 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187471 4998 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187483 4998 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 
18:53:01.187494 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187538 4998 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187551 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187562 4998 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187574 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187567 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-etc-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187586 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187616 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-cni-multus\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.178552 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187644 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187657 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-slash\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187678 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-var-lib-openvswitch\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187726 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-cnibin\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187761 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2cc4703c-51fa-4a35-ab04-0a6028035fb2-host\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187838 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-os-release\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187868 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-multus-certs\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187885 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.187963 4998 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.188021 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs podName:ab88c832-775d-46c6-9167-aa51d0574b17 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:01.6880022 +0000 UTC m=+85.336044970 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs") pod "network-metrics-daemon-z9wmf" (UID: "ab88c832-775d-46c6-9167-aa51d0574b17") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188058 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-run-netns\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188092 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-etc-kubernetes\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188095 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-ovn\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.177481 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188130 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-host-var-lib-cni-bin\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187665 4998 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188224 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/085d31f3-c7fb-4aca-903c-9db17e8d0047-cni-binary-copy\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188239 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188251 4998 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188260 4998 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188270 4998 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188279 4998 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188288 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188297 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188307 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188316 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188334 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188344 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188365 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188378 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188388 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188398 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188408 4998 reconciler_common.go:299] "Volume 
detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188411 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-multus-daemon-config\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188419 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188458 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188470 4998 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188486 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188502 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188514 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188528 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188540 4998 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188553 4998 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188566 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188574 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/085d31f3-c7fb-4aca-903c-9db17e8d0047-tuning-conf-dir\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: 
\"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.179257 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-systemd\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188579 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186780 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-netns\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188613 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188628 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188631 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-kubelet\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.187963 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f0a43997-c346-42c7-a485-b2b55c22c9c6-tmp-dir\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188641 4998 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188656 4998 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188668 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188676 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/f0a43997-c346-42c7-a485-b2b55c22c9c6-hosts-file\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc 
kubenswrapper[4998]: I1208 18:53:01.188679 4998 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188712 4998 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188722 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188730 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188740 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188750 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188761 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188770 4998 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188779 4998 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188797 4998 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188806 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188816 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188824 4998 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc 
kubenswrapper[4998]: I1208 18:53:01.188835 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188844 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188852 4998 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188861 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188869 4998 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node \"crc\" DevicePath \"\"" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188728 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-hostroot\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.186812 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-node-log\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.188969 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.190044 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0c186590-6bde-4b05-ac4d-9e6f0e656d17-mcd-auth-proxy-config\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.191197 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-script-lib\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.191713 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8867028-389a-494e-b230-ed29201b63ca-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.192455 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.192441 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.195596 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj679\" (UniqueName: \"kubernetes.io/projected/b8867028-389a-494e-b230-ed29201b63ca-kube-api-access-pj679\") pod \"ovnkube-control-plane-57b78d8988-ql7xr\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.198445 
4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2qzp\" (UniqueName: \"kubernetes.io/projected/2cc4703c-51fa-4a35-ab04-0a6028035fb2-kube-api-access-n2qzp\") pod \"node-ca-lmsm8\" (UID: \"2cc4703c-51fa-4a35-ab04-0a6028035fb2\") " pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.199047 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2w77\" (UniqueName: \"kubernetes.io/projected/085d31f3-c7fb-4aca-903c-9db17e8d0047-kube-api-access-s2w77\") pod \"multus-additional-cni-plugins-9kdnj\" (UID: \"085d31f3-c7fb-4aca-903c-9db17e8d0047\") " pod="openshift-multus/multus-additional-cni-plugins-9kdnj" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.204989 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbwzm\" (UniqueName: \"kubernetes.io/projected/0c186590-6bde-4b05-ac4d-9e6f0e656d17-kube-api-access-mbwzm\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.205904 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lvgz\" (UniqueName: \"kubernetes.io/projected/fc7150c6-b180-4712-a5ed-6b25328d0118-kube-api-access-9lvgz\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.206258 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0c186590-6bde-4b05-ac4d-9e6f0e656d17-proxy-tls\") pod \"machine-config-daemon-gwq5q\" (UID: \"0c186590-6bde-4b05-ac4d-9e6f0e656d17\") " pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.209196 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brvct\" (UniqueName: \"kubernetes.io/projected/ab88c832-775d-46c6-9167-aa51d0574b17-kube-api-access-brvct\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.209982 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc7150c6-b180-4712-a5ed-6b25328d0118-ovn-node-metrics-cert\") pod \"ovnkube-node-h7zr9\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.210853 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-26tn9\" (UniqueName: \"kubernetes.io/projected/f0a43997-c346-42c7-a485-b2b55c22c9c6-kube-api-access-26tn9\") pod \"node-resolver-wjfn5\" (UID: \"f0a43997-c346-42c7-a485-b2b55c22c9c6\") " pod="openshift-dns/node-resolver-wjfn5" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.211708 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gdw2c\" (UniqueName: \"kubernetes.io/projected/88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa-kube-api-access-gdw2c\") pod \"multus-72nfz\" (UID: \"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\") " pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.214719 4998 
kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: source /etc/kubernetes/apiserver-url.env Dec 08 18:53:01 crc kubenswrapper[4998]: else Dec 08 18:53:01 crc kubenswrapper[4998]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 18:53:01 crc kubenswrapper[4998]: exit 1 Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:N
ETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.215708 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99ffc072-8f76-4a27-bb7b-b1ff802d45cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.216616 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.219655 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.224990 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: W1208 18:53:01.230560 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc4541ce_7789_4670_bc75_5c2868e52ce0.slice/crio-a8cf3f9200d818f79de2f1fe543b32fabfcc2b83810ed0139be11e74dc523b6a WatchSource:0}: Error finding container a8cf3f9200d818f79de2f1fe543b32fabfcc2b83810ed0139be11e74dc523b6a: Status 404 returned error can't find the container with id a8cf3f9200d818f79de2f1fe543b32fabfcc2b83810ed0139be11e74dc523b6a Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.231881 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q"
Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.234029 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then
Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport
Dec 08 18:53:01 crc kubenswrapper[4998]: source "/env/_master"
Dec 08 18:53:01 crc kubenswrapper[4998]: set +o allexport
Dec 08 18:53:01 crc kubenswrapper[4998]: fi
Dec 08 18:53:01 crc kubenswrapper[4998]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Dec 08 18:53:01 crc kubenswrapper[4998]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Dec 08 18:53:01 crc kubenswrapper[4998]: ho_enable="--enable-hybrid-overlay"
Dec 08 18:53:01 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Dec 08 18:53:01 crc kubenswrapper[4998]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Dec 08 18:53:01 crc kubenswrapper[4998]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Dec 08 18:53:01 crc kubenswrapper[4998]: --webhook-cert-dir="/etc/webhook-cert" \
Dec 08 18:53:01 crc kubenswrapper[4998]: --webhook-host=127.0.0.1 \
Dec 08 18:53:01 crc kubenswrapper[4998]: --webhook-port=9743 \
Dec 08 18:53:01 crc kubenswrapper[4998]: ${ho_enable} \
Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-interconnect \
Dec 08 18:53:01 crc kubenswrapper[4998]: --disable-approver \
Dec 08 18:53:01 crc kubenswrapper[4998]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Dec 08 18:53:01 crc kubenswrapper[4998]: --wait-for-kubernetes-api=200s \
Dec 08 18:53:01 crc kubenswrapper[4998]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Dec 08 18:53:01 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}"
Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.235676 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted.
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.241363 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport Dec 08 18:53:01 crc kubenswrapper[4998]: source "/env/_master" Dec 08 18:53:01 crc kubenswrapper[4998]: set +o allexport Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver" Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \ Dec 08 18:53:01 crc kubenswrapper[4998]: --disable-webhook \ Dec 08 18:53:01 crc kubenswrapper[4998]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \ Dec 08 18:53:01 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}" Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.243499 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 18:53:01 crc kubenswrapper[4998]: W1208 18:53:01.248660 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c186590_6bde_4b05_ac4d_9e6f0e656d17.slice/crio-f440544c11708d4410c1d2b5803f6929aa98ae3ffd9a3e4e0e0d215748ccd28b WatchSource:0}: Error finding container f440544c11708d4410c1d2b5803f6929aa98ae3ffd9a3e4e0e0d215748ccd28b: Status 404 returned error can't find the container with id f440544c11708d4410c1d2b5803f6929aa98ae3ffd9a3e4e0e0d215748ccd28b Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.248680 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.249235 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.251652 4998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbwzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 },Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-gwq5q_openshift-machine-config-operator(0c186590-6bde-4b05-ac4d-9e6f0e656d17): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.254844 4998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbwzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-gwq5q_openshift-machine-config-operator(0c186590-6bde-4b05-ac4d-9e6f0e656d17): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.256977 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.260121 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.262221 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-lmsm8" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.269809 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.269994 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.273210 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.273248 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.273262 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.273279 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.273292 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.275749 4998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:iptables-alerter,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/iptables-alerter/iptables-alerter.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONTAINER_RUNTIME_ENDPOINT,Value:unix:///run/crio/crio.sock,ValueFrom:nil,},EnvVar{Name:ALERTER_POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:iptables-alerter-script,ReadOnly:false,MountPath:/iptables-alerter,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-slash,ReadOnly:true,MountPath:/host,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dsgwk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod iptables-alerter-5jnd7_openshift-network-operator(428b39f5-eb1c-4f65-b7a4-eeb6e84860cc): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.277046 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.278644 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-wjfn5"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.278912 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjfn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a43997-c346-42c7-a485-b2b55c22c9c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26tn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjfn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.281207 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM
Dec 08 18:53:01 crc kubenswrapper[4998]: while [ true ];
Dec 08 18:53:01 crc kubenswrapper[4998]: do
Dec 08 18:53:01 crc kubenswrapper[4998]: for f in $(ls /tmp/serviceca); do
Dec 08 18:53:01 crc kubenswrapper[4998]: echo $f
Dec 08 18:53:01 crc kubenswrapper[4998]: ca_file_path="/tmp/serviceca/${f}"
Dec 08 18:53:01 crc kubenswrapper[4998]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/')
Dec 08 18:53:01 crc kubenswrapper[4998]: reg_dir_path="/etc/docker/certs.d/${f}"
Dec 08 18:53:01 crc kubenswrapper[4998]: if [ -e "${reg_dir_path}" ]; then
Dec 08 18:53:01 crc kubenswrapper[4998]: cp -u $ca_file_path $reg_dir_path/ca.crt
Dec 08 18:53:01 crc kubenswrapper[4998]: else
Dec 08 18:53:01 crc kubenswrapper[4998]: mkdir $reg_dir_path
Dec 08 18:53:01 crc kubenswrapper[4998]: cp $ca_file_path $reg_dir_path/ca.crt
Dec 08 18:53:01 crc kubenswrapper[4998]: fi
Dec 08 18:53:01 crc kubenswrapper[4998]: done
Dec 08 18:53:01 crc kubenswrapper[4998]: for d in $(ls /etc/docker/certs.d); do
Dec 08 18:53:01 crc kubenswrapper[4998]: echo $d
Dec 08 18:53:01 crc kubenswrapper[4998]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./')
Dec 08 18:53:01 crc kubenswrapper[4998]: reg_conf_path="/tmp/serviceca/${dp}"
Dec 08 18:53:01 crc kubenswrapper[4998]: if [ ! -e "${reg_conf_path}" ]; then
Dec 08 18:53:01 crc kubenswrapper[4998]: rm -rf /etc/docker/certs.d/$d
Dec 08 18:53:01 crc kubenswrapper[4998]: fi
Dec 08 18:53:01 crc kubenswrapper[4998]: done
Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait ${!}
Dec 08 18:53:01 crc kubenswrapper[4998]: done
Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2qzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-lmsm8_openshift-image-registry(2cc4703c-51fa-4a35-ab04-0a6028035fb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError"
Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.282533 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-lmsm8" podUID="2cc4703c-51fa-4a35-ab04-0a6028035fb2"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.286697 4998 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/multus-72nfz" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.292152 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4efefa79-ffc6-4211-84df-8feef5c66eba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://355883c3b875ba7df515d5d07538ec1a017d38a87bf6cbef9f6a939b1b0f860c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://83458962d98a0db15939e11f6ac7a1f814ac5cf95aec1adc4993753182d9e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd499714a3956c76fc95cf29eb557f
332ab8a3d8927878cd076ed6fe0b97da75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.298099 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-9kdnj"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.300154 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0da1f94e-cb48-4dc3-ac19-c4b1cb4dbc24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://486121fa7a66609e79a4ec8139d2aadefdc5b8d1ed0c710a77116e00e8a28078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 18:53:01 crc kubenswrapper[4998]: W1208 18:53:01.301059 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfc7150c6_b180_4712_a5ed_6b25328d0118.slice/crio-2bb7de2650f3dfcf05791935a887950f2a4579e0ca79457182f81ee6ff637412 WatchSource:0}: Error finding container 2bb7de2650f3dfcf05791935a887950f2a4579e0ca79457182f81ee6ff637412: Status 404 returned error can't find the container with id 2bb7de2650f3dfcf05791935a887950f2a4579e0ca79457182f81ee6ff637412
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.302728 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.308929 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab88c832-775d-46c6-9167-aa51d0574b17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z9wmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.310134 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 18:53:01 crc kubenswrapper[4998]: apiVersion: v1 Dec 08 18:53:01 crc kubenswrapper[4998]: clusters: Dec 08 18:53:01 crc kubenswrapper[4998]: - cluster: Dec 08 18:53:01 crc kubenswrapper[4998]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 18:53:01 crc kubenswrapper[4998]: server: https://api-int.crc.testing:6443 Dec 08 18:53:01 crc kubenswrapper[4998]: name: default-cluster Dec 08 18:53:01 crc kubenswrapper[4998]: contexts: Dec 08 18:53:01 crc kubenswrapper[4998]: - context: Dec 08 18:53:01 crc kubenswrapper[4998]: cluster: default-cluster Dec 08 18:53:01 crc kubenswrapper[4998]: namespace: default Dec 08 18:53:01 crc kubenswrapper[4998]: user: default-auth Dec 08 18:53:01 crc kubenswrapper[4998]: name: default-context Dec 08 18:53:01 crc kubenswrapper[4998]: current-context: default-context Dec 08 18:53:01 crc kubenswrapper[4998]: kind: Config Dec 08 18:53:01 crc kubenswrapper[4998]: preferences: {} Dec 08 18:53:01 crc kubenswrapper[4998]: users: Dec 08 18:53:01 crc kubenswrapper[4998]: - name: default-auth Dec 08 18:53:01 crc kubenswrapper[4998]: user: Dec 08 18:53:01 crc kubenswrapper[4998]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:53:01 crc kubenswrapper[4998]: client-key: 
/etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:53:01 crc kubenswrapper[4998]: EOF Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lvgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-h7zr9_openshift-ovn-kubernetes(fc7150c6-b180-4712-a5ed-6b25328d0118): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.311331 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.316211 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 18:53:01 crc kubenswrapper[4998]: set -uo pipefail Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 18:53:01 crc kubenswrapper[4998]: HOSTS_FILE="/etc/hosts" Dec 08 18:53:01 crc kubenswrapper[4998]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Make a temporary file with the old hosts file's attributes. Dec 08 18:53:01 crc kubenswrapper[4998]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 18:53:01 crc kubenswrapper[4998]: echo "Failed to preserve hosts file. Exiting." Dec 08 18:53:01 crc kubenswrapper[4998]: exit 1 Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: while true; do Dec 08 18:53:01 crc kubenswrapper[4998]: declare -A svc_ips Dec 08 18:53:01 crc kubenswrapper[4998]: for svc in "${services[@]}"; do Dec 08 18:53:01 crc kubenswrapper[4998]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 18:53:01 crc kubenswrapper[4998]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. 
The two last ones Dec 08 18:53:01 crc kubenswrapper[4998]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 18:53:01 crc kubenswrapper[4998]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 18:53:01 crc kubenswrapper[4998]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:53:01 crc kubenswrapper[4998]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:53:01 crc kubenswrapper[4998]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:53:01 crc kubenswrapper[4998]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 18:53:01 crc kubenswrapper[4998]: for i in ${!cmds[*]} Dec 08 18:53:01 crc kubenswrapper[4998]: do Dec 08 18:53:01 crc kubenswrapper[4998]: ips=($(eval "${cmds[i]}")) Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "$?" -eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: svc_ips["${svc}"]="${ips[@]}" Dec 08 18:53:01 crc kubenswrapper[4998]: break Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Update /etc/hosts only if we get valid service IPs Dec 08 18:53:01 crc kubenswrapper[4998]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 18:53:01 crc kubenswrapper[4998]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 18:53:01 crc kubenswrapper[4998]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 18:53:01 crc kubenswrapper[4998]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait Dec 08 18:53:01 crc kubenswrapper[4998]: continue Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Append resolver entries for services Dec 08 18:53:01 crc kubenswrapper[4998]: rc=0 Dec 08 18:53:01 crc kubenswrapper[4998]: for svc in "${!svc_ips[@]}"; do Dec 08 18:53:01 crc kubenswrapper[4998]: for ip in ${svc_ips[${svc}]}; do Dec 08 18:53:01 crc kubenswrapper[4998]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? 
Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ $rc -ne 0 ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait Dec 08 18:53:01 crc kubenswrapper[4998]: continue Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 18:53:01 crc kubenswrapper[4998]: # Replace /etc/hosts with our modified version if needed Dec 08 18:53:01 crc kubenswrapper[4998]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 18:53:01 crc kubenswrapper[4998]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait Dec 08 18:53:01 crc kubenswrapper[4998]: unset svc_ips Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26tn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wjfn5_openshift-dns(f0a43997-c346-42c7-a485-b2b55c22c9c6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.317961 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wjfn5" podUID="f0a43997-c346-42c7-a485-b2b55c22c9c6" Dec 08 18:53:01 crc kubenswrapper[4998]: W1208 18:53:01.325482 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod085d31f3_c7fb_4aca_903c_9db17e8d0047.slice/crio-4053a12bc6ad99e7a24401facce950b3dc039d0bf752fc206e30d94fdbece277 WatchSource:0}: Error 
finding container 4053a12bc6ad99e7a24401facce950b3dc039d0bf752fc206e30d94fdbece277: Status 404 returned error can't find the container with id 4053a12bc6ad99e7a24401facce950b3dc039d0bf752fc206e30d94fdbece277 Dec 08 18:53:01 crc kubenswrapper[4998]: W1208 18:53:01.327370 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod88f11a4e_e168_4ddd_bb7b_7eb4ddd4c9aa.slice/crio-20a5e43572ccf85304def183fc31308cd73a321a4eda158228f2cd2777d2d1c6 WatchSource:0}: Error finding container 20a5e43572ccf85304def183fc31308cd73a321a4eda158228f2cd2777d2d1c6: Status 404 returned error can't find the container with id 20a5e43572ccf85304def183fc31308cd73a321a4eda158228f2cd2777d2d1c6 Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.329200 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 18:53:01 crc kubenswrapper[4998]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 18:53:01 crc kubenswrapper[4998]: ],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdw2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-72nfz_openshift-multus(88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.329512 4998 
kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2w77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-9kdnj_openshift-multus(085d31f3-c7fb-4aca-903c-9db17e8d0047): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.330499 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-72nfz" podUID="88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.332266 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" podUID="085d31f3-c7fb-4aca-903c-9db17e8d0047" Dec 08 18:53:01 crc kubenswrapper[4998]: W1208 18:53:01.335797 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8867028_389a_494e_b230_ed29201b63ca.slice/crio-de33d590dbedbf5e0c3d89c87fd218cbbc7fa0e27df0ce3227bda54457ff8636 WatchSource:0}: Error finding container de33d590dbedbf5e0c3d89c87fd218cbbc7fa0e27df0ce3227bda54457ff8636: Status 404 returned error can't find the container with id de33d590dbedbf5e0c3d89c87fd218cbbc7fa0e27df0ce3227bda54457ff8636 Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.340119 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 18:53:01 crc kubenswrapper[4998]: set -euo pipefail Dec 08 18:53:01 crc kubenswrapper[4998]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 18:53:01 crc kubenswrapper[4998]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 18:53:01 crc kubenswrapper[4998]: # As the secret mount is optional we must wait for the files to be present. Dec 08 18:53:01 crc kubenswrapper[4998]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 18:53:01 crc kubenswrapper[4998]: TS=$(date +%s) Dec 08 18:53:01 crc kubenswrapper[4998]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 18:53:01 crc kubenswrapper[4998]: HAS_LOGGED_INFO=0 Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: log_missing_certs(){ Dec 08 18:53:01 crc kubenswrapper[4998]: CUR_TS=$(date +%s) Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 18:53:01 crc kubenswrapper[4998]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 18:53:01 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 18:53:01 crc kubenswrapper[4998]: HAS_LOGGED_INFO=1 Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: } Dec 08 18:53:01 crc kubenswrapper[4998]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 08 18:53:01 crc kubenswrapper[4998]: log_missing_certs Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 5 Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/kube-rbac-proxy \ Dec 08 18:53:01 crc kubenswrapper[4998]: --logtostderr \ Dec 08 18:53:01 crc kubenswrapper[4998]: --secure-listen-address=:9108 \ Dec 08 18:53:01 crc kubenswrapper[4998]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 18:53:01 crc kubenswrapper[4998]: --upstream=http://127.0.0.1:29108/ \ Dec 08 18:53:01 crc kubenswrapper[4998]: --tls-private-key-file=${TLS_PK} \ Dec 08 18:53:01 crc kubenswrapper[4998]: --tls-cert-file=${TLS_CERT} Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj679,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ql7xr_openshift-ovn-kubernetes(b8867028-389a-494e-b230-ed29201b63ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.342924 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport Dec 08 18:53:01 crc kubenswrapper[4998]: source "/env/_master" Dec 08 18:53:01 crc kubenswrapper[4998]: set +o allexport Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # This is needed so that converting clusters from GA to 
TP Dec 08 18:53:01 crc kubenswrapper[4998]: # will rollout control plane pods as well Dec 08 18:53:01 crc kubenswrapper[4998]: network_segmentation_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" != "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: route_advertisements_enable_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: preconfigured_udn_addresses_enable_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_policy_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 18:53:01 crc kubenswrapper[4998]: admin_network_policy_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: if [ "shared" == "shared" ]; then Dec 08 18:53:01 crc kubenswrapper[4998]: gateway_mode_flags="--gateway-mode shared" Dec 08 18:53:01 crc kubenswrapper[4998]: elif [ "shared" == "local" ]; then Dec 08 18:53:01 crc kubenswrapper[4998]: gateway_mode_flags="--gateway-mode local" Dec 08 18:53:01 crc kubenswrapper[4998]: else Dec 08 18:53:01 crc kubenswrapper[4998]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." 
Dec 08 18:53:01 crc kubenswrapper[4998]: exit 1 Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/ovnkube \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-interconnect \ Dec 08 18:53:01 crc kubenswrapper[4998]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 18:53:01 crc kubenswrapper[4998]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 18:53:01 crc kubenswrapper[4998]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 18:53:01 crc kubenswrapper[4998]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 18:53:01 crc kubenswrapper[4998]: --metrics-enable-pprof \ Dec 08 18:53:01 crc kubenswrapper[4998]: --metrics-enable-config-duration \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v4_join_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v6_join_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${dns_name_resolver_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${persistent_ips_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${multi_network_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${network_segmentation_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${gateway_mode_flags} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${route_advertisements_enable_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-ip=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-firewall=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-qos=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-service=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-multicast \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-multi-external-gateway=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${multi_network_policy_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${admin_network_policy_enabled_flag} Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj679,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ql7xr_openshift-ovn-kubernetes(b8867028-389a-494e-b230-ed29201b63ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.344597 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" podUID="b8867028-389a-494e-b230-ed29201b63ca" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.370102 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.370988 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.372635 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.374330 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.375560 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.375612 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.375621 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.375646 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.375658 4998 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.376667 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.378373 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.379605 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.381212 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.381759 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.383054 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.383954 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.385895 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.386488 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.388216 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.388657 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.389332 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.390464 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.391667 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.394047 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.395136 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.396267 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.399033 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.400250 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.401220 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.402415 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.403293 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.404834 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.405616 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.408253 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.409258 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.410329 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" 
path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.411565 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.412742 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.413909 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.414643 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.415519 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.416234 4998 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.416334 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.419811 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.420892 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.422341 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.423606 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.424664 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.425757 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.426956 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" 
path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.427438 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.428089 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.429470 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.430307 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.431394 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.432219 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.433405 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.434831 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.436118 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.437908 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.438547 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.440482 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.441364 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.477313 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 
18:53:01.477354 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.477362 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.477376 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.477384 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.580176 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.580237 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.580257 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.580282 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.580299 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.595670 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.596139 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.596070331 +0000 UTC m=+86.244113041 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.600159 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.600230 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.600269 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.600293 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600397 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600415 4998 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600475 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.600454229 +0000 UTC m=+86.248496919 (durationBeforeRetry 1s). 
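The TearDownAt failure above ("driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers") is a lookup miss in the node's CSI driver registry: CSI unmounts need a client for the named driver, and after a kubelet restart the hostpath provisioner has not yet re-registered on the node. A sketch of that shape, assuming a plain name-to-client map (hypothetical types, not kubelet source):

package main

import (
	"fmt"
	"sync"
)

type csiClient struct{ endpoint string } // hypothetical stand-in for a gRPC client

type csiDriverRegistry struct {
	mu      sync.RWMutex
	drivers map[string]*csiClient
}

func newRegistry() *csiDriverRegistry {
	return &csiDriverRegistry{drivers: map[string]*csiClient{}}
}

// register is what the plugin-registration path would invoke once the driver
// announces itself over its registration socket.
func (r *csiDriverRegistry) register(name, endpoint string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.drivers[name] = &csiClient{endpoint: endpoint}
}

// clientFor mirrors the failure mode in the log: no client until registration.
func (r *csiDriverRegistry) clientFor(name string) (*csiClient, error) {
	r.mu.RLock()
	defer r.mu.RUnlock()
	c, ok := r.drivers[name]
	if !ok {
		return nil, fmt.Errorf("driver name %s not found in the list of registered CSI drivers", name)
	}
	return c, nil
}

func main() {
	reg := newRegistry()
	if _, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err != nil {
		fmt.Println("TearDown fails:", err) // the state captured in the log
	}
	// Hypothetical socket path for illustration only.
	reg.register("kubevirt.io.hostpath-provisioner", "/var/lib/kubelet/plugins/hostpath/csi.sock")
	if c, err := reg.clientFor("kubevirt.io.hostpath-provisioner"); err == nil {
		fmt.Println("after registration, unmount can proceed via", c.endpoint)
	}
}

Once the driver re-registers, the same lookup succeeds and the pending unmount can retry under the backoff gate shown earlier.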
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600494 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.60048637 +0000 UTC m=+86.248529060 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600517 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600550 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600556 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600567 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600578 4998 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600579 4998 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600636 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.600624413 +0000 UTC m=+86.248667093 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.600652 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.600645744 +0000 UTC m=+86.248688544 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.682801 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.682840 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.682851 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.682870 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.682889 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.701105 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.701274 4998 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.701363 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs podName:ab88c832-775d-46c6-9167-aa51d0574b17 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:02.701345919 +0000 UTC m=+86.349388619 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs") pod "network-metrics-daemon-z9wmf" (UID: "ab88c832-775d-46c6-9167-aa51d0574b17") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.785168 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.785233 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.785244 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.785262 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.785273 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.886991 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.887183 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.887213 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.887237 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.887250 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.946488 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wjfn5" event={"ID":"f0a43997-c346-42c7-a485-b2b55c22c9c6","Type":"ContainerStarted","Data":"eb8af7f286c9c99f80b6582e5e7943635da644135a646db6db0c9ad7ca568df2"} Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.948937 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:dns-node-resolver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,Command:[/bin/bash -c #!/bin/bash Dec 08 18:53:01 crc kubenswrapper[4998]: set -uo pipefail Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: trap 'jobs -p | xargs kill || true; wait; exit 0' TERM Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: OPENSHIFT_MARKER="openshift-generated-node-resolver" Dec 08 18:53:01 crc kubenswrapper[4998]: HOSTS_FILE="/etc/hosts" Dec 08 18:53:01 crc kubenswrapper[4998]: TEMP_FILE="/tmp/hosts.tmp" Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: IFS=', ' read -r -a services <<< "${SERVICES}" Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Make a temporary file with the old hosts file's attributes. Dec 08 18:53:01 crc kubenswrapper[4998]: if ! cp -f --attributes-only "${HOSTS_FILE}" "${TEMP_FILE}"; then Dec 08 18:53:01 crc kubenswrapper[4998]: echo "Failed to preserve hosts file. Exiting." Dec 08 18:53:01 crc kubenswrapper[4998]: exit 1 Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: while true; do Dec 08 18:53:01 crc kubenswrapper[4998]: declare -A svc_ips Dec 08 18:53:01 crc kubenswrapper[4998]: for svc in "${services[@]}"; do Dec 08 18:53:01 crc kubenswrapper[4998]: # Fetch service IP from cluster dns if present. We make several tries Dec 08 18:53:01 crc kubenswrapper[4998]: # to do it: IPv4, IPv6, IPv4 over TCP and IPv6 over TCP. The two last ones Dec 08 18:53:01 crc kubenswrapper[4998]: # are for deployments with Kuryr on older OpenStack (OSP13) - those do not Dec 08 18:53:01 crc kubenswrapper[4998]: # support UDP loadbalancers and require reaching DNS through TCP. Dec 08 18:53:01 crc kubenswrapper[4998]: cmds=('dig -t A @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:53:01 crc kubenswrapper[4998]: 'dig -t AAAA @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:53:01 crc kubenswrapper[4998]: 'dig -t A +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"' Dec 08 18:53:01 crc kubenswrapper[4998]: 'dig -t AAAA +tcp +retry=0 @"${NAMESERVER}" +short "${svc}.${CLUSTER_DOMAIN}"|grep -v "^;"') Dec 08 18:53:01 crc kubenswrapper[4998]: for i in ${!cmds[*]} Dec 08 18:53:01 crc kubenswrapper[4998]: do Dec 08 18:53:01 crc kubenswrapper[4998]: ips=($(eval "${cmds[i]}")) Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "$?" 
-eq 0 && "${#ips[@]}" -ne 0 ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: svc_ips["${svc}"]="${ips[@]}" Dec 08 18:53:01 crc kubenswrapper[4998]: break Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Update /etc/hosts only if we get valid service IPs Dec 08 18:53:01 crc kubenswrapper[4998]: # We will not update /etc/hosts when there is coredns service outage or api unavailability Dec 08 18:53:01 crc kubenswrapper[4998]: # Stale entries could exist in /etc/hosts if the service is deleted Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -n "${svc_ips[*]-}" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: # Build a new hosts file from /etc/hosts with our custom entries filtered out Dec 08 18:53:01 crc kubenswrapper[4998]: if ! sed --silent "/# ${OPENSHIFT_MARKER}/d; w ${TEMP_FILE}" "${HOSTS_FILE}"; then Dec 08 18:53:01 crc kubenswrapper[4998]: # Only continue rebuilding the hosts entries if its original content is preserved Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait Dec 08 18:53:01 crc kubenswrapper[4998]: continue Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Append resolver entries for services Dec 08 18:53:01 crc kubenswrapper[4998]: rc=0 Dec 08 18:53:01 crc kubenswrapper[4998]: for svc in "${!svc_ips[@]}"; do Dec 08 18:53:01 crc kubenswrapper[4998]: for ip in ${svc_ips[${svc}]}; do Dec 08 18:53:01 crc kubenswrapper[4998]: echo "${ip} ${svc} ${svc}.${CLUSTER_DOMAIN} # ${OPENSHIFT_MARKER}" >> "${TEMP_FILE}" || rc=$? Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ $rc -ne 0 ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait Dec 08 18:53:01 crc kubenswrapper[4998]: continue Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # TODO: Update /etc/hosts atomically to avoid any inconsistent behavior Dec 08 18:53:01 crc kubenswrapper[4998]: # Replace /etc/hosts with our modified version if needed Dec 08 18:53:01 crc kubenswrapper[4998]: cmp "${TEMP_FILE}" "${HOSTS_FILE}" || cp -f "${TEMP_FILE}" "${HOSTS_FILE}" Dec 08 18:53:01 crc kubenswrapper[4998]: # TEMP_FILE is not removed to avoid file create/delete and attributes copy churn Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait Dec 08 18:53:01 crc kubenswrapper[4998]: unset svc_ips Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:SERVICES,Value:image-registry.openshift-image-registry.svc,ValueFrom:nil,},EnvVar{Name:NAMESERVER,Value:10.217.4.10,ValueFrom:nil,},EnvVar{Name:CLUSTER_DOMAIN,Value:cluster.local,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{22020096 0} {} 21Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hosts-file,ReadOnly:false,MountPath:/etc/hosts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-26tn9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-resolver-wjfn5_openshift-dns(f0a43997-c346-42c7-a485-b2b55c22c9c6): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.951497 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"dns-node-resolver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-dns/node-resolver-wjfn5" podUID="f0a43997-c346-42c7-a485-b2b55c22c9c6" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.952545 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"2bb7de2650f3dfcf05791935a887950f2a4579e0ca79457182f81ee6ff637412"} Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.954609 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: init container &Container{Name:kubecfg-setup,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c cat << EOF > /etc/ovn/kubeconfig Dec 08 18:53:01 crc kubenswrapper[4998]: apiVersion: v1 Dec 08 18:53:01 crc kubenswrapper[4998]: clusters: Dec 08 18:53:01 crc kubenswrapper[4998]: - cluster: Dec 08 18:53:01 crc kubenswrapper[4998]: certificate-authority: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt Dec 08 18:53:01 crc kubenswrapper[4998]: server: https://api-int.crc.testing:6443 Dec 08 18:53:01 crc kubenswrapper[4998]: name: default-cluster Dec 08 18:53:01 crc kubenswrapper[4998]: contexts: Dec 08 18:53:01 crc kubenswrapper[4998]: - context: Dec 08 18:53:01 crc kubenswrapper[4998]: cluster: default-cluster Dec 08 18:53:01 crc kubenswrapper[4998]: namespace: default Dec 08 18:53:01 crc kubenswrapper[4998]: user: default-auth Dec 08 18:53:01 crc kubenswrapper[4998]: name: default-context Dec 08 18:53:01 crc kubenswrapper[4998]: current-context: default-context Dec 08 18:53:01 crc kubenswrapper[4998]: kind: Config Dec 08 18:53:01 crc kubenswrapper[4998]: preferences: {} Dec 08 18:53:01 crc kubenswrapper[4998]: users: Dec 08 18:53:01 crc kubenswrapper[4998]: - name: default-auth Dec 08 18:53:01 crc kubenswrapper[4998]: 
user: Dec 08 18:53:01 crc kubenswrapper[4998]: client-certificate: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:53:01 crc kubenswrapper[4998]: client-key: /etc/ovn/ovnkube-node-certs/ovnkube-client-current.pem Dec 08 18:53:01 crc kubenswrapper[4998]: EOF Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-openvswitch,ReadOnly:false,MountPath:/etc/ovn/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lvgz,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-node-h7zr9_openshift-ovn-kubernetes(fc7150c6-b180-4712-a5ed-6b25328d0118): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.961626 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"f440544c11708d4410c1d2b5803f6929aa98ae3ffd9a3e4e0e0d215748ccd28b"} Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.962414 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kubecfg-setup\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.964535 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-72nfz" event={"ID":"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa","Type":"ContainerStarted","Data":"20a5e43572ccf85304def183fc31308cd73a321a4eda158228f2cd2777d2d1c6"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.971232 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" event={"ID":"b8867028-389a-494e-b230-ed29201b63ca","Type":"ContainerStarted","Data":"de33d590dbedbf5e0c3d89c87fd218cbbc7fa0e27df0ce3227bda54457ff8636"} Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.972087 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.973807 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:kube-multus,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,Command:[/bin/bash -ec --],Args:[MULTUS_DAEMON_OPT="" Dec 08 18:53:01 crc kubenswrapper[4998]: /entrypoint/cnibincopy.sh; exec /usr/src/multus-cni/bin/multus-daemon $MULTUS_DAEMON_OPT Dec 08 18:53:01 crc kubenswrapper[4998]: 
],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/multus-cni/bin/,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:6443,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:api-int.crc.testing,ValueFrom:nil,},EnvVar{Name:MULTUS_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{68157440 0} {} 65Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:false,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:system-cni-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-cni-dir,ReadOnly:false,MountPath:/host/run/multus/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-socket-dir-parent,ReadOnly:false,MountPath:/host/run/multus,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-k8s-cni-cncf-io,ReadOnly:false,MountPath:/run/k8s.cni.cncf.io,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-netns,ReadOnly:false,MountPath:/run/netns,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-bin,ReadOnly:false,MountPath:/var/lib/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-cni-multus,ReadOnly:false,MountPath:/var/lib/cni/multus,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-var-lib-kubelet,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:hostroot,ReadOnly:false,MountPath:/hostroot,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-conf-dir,ReadOnly:false,MountPath:/etc/cni/multus/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:multus-daemon-config,ReadOnly:true,MountPath:/etc/cni/net.d/multus.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host-run-multus-certs,ReadOnly:false,MountPath:/etc/cni/multus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:etc-kubernetes,ReadOnly:false,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gdw2c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,Re
cursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-72nfz_openshift-multus(88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.974912 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lmsm8" event={"ID":"2cc4703c-51fa-4a35-ab04-0a6028035fb2","Type":"ContainerStarted","Data":"3869d02b9f5081ebf28f0b6978bafb130dfa6628f4531909865ed9ba894a9d17"} Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.975011 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-72nfz" podUID="88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.973528 4998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:machine-config-daemon,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115,Command:[/usr/bin/machine-config-daemon],Args:[start --payload-version=4.20.1],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:health,HostPort:8798,ContainerPort:8798,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:rootfs,ReadOnly:false,MountPath:/rootfs,SubPath:,MountPropagation:*HostToContainer,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbwzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8798 
},Host:127.0.0.1,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:120,TimeoutSeconds:1,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-gwq5q_openshift-machine-config-operator(0c186590-6bde-4b05-ac4d-9e6f0e656d17): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.978285 4998 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[],Args:[--secure-listen-address=0.0.0.0:9001 --config-file=/etc/kube-rbac-proxy/config-file.yaml --tls-cipher-suites=TLS_AES_128_GCM_SHA256,TLS_AES_256_GCM_SHA384,TLS_CHACHA20_POLY1305_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256,TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 --tls-min-version=VersionTLS12 --upstream=http://127.0.0.1:8797 --logtostderr=true --tls-cert-file=/etc/tls/private/tls.crt --tls-private-key-file=/etc/tls/private/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:9001,ContainerPort:9001,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{20 -3} {} 20m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:proxy-tls,ReadOnly:false,MountPath:/etc/tls/private,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:mcd-auth-proxy-config,ReadOnly:false,MountPath:/etc/kube-rbac-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mbwzm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod machine-config-daemon-gwq5q_openshift-machine-config-operator(0c186590-6bde-4b05-ac4d-9e6f0e656d17): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.979187 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container 
&Container{Name:kube-rbac-proxy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,Command:[/bin/bash -c #!/bin/bash Dec 08 18:53:01 crc kubenswrapper[4998]: set -euo pipefail Dec 08 18:53:01 crc kubenswrapper[4998]: TLS_PK=/etc/pki/tls/metrics-cert/tls.key Dec 08 18:53:01 crc kubenswrapper[4998]: TLS_CERT=/etc/pki/tls/metrics-cert/tls.crt Dec 08 18:53:01 crc kubenswrapper[4998]: # As the secret mount is optional we must wait for the files to be present. Dec 08 18:53:01 crc kubenswrapper[4998]: # The service is created in monitor.yaml and this is created in sdn.yaml. Dec 08 18:53:01 crc kubenswrapper[4998]: TS=$(date +%s) Dec 08 18:53:01 crc kubenswrapper[4998]: WARN_TS=$(( ${TS} + $(( 20 * 60)) )) Dec 08 18:53:01 crc kubenswrapper[4998]: HAS_LOGGED_INFO=0 Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: log_missing_certs(){ Dec 08 18:53:01 crc kubenswrapper[4998]: CUR_TS=$(date +%s) Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "${CUR_TS}" -gt "WARN_TS" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: echo $(date -Iseconds) WARN: ovn-control-plane-metrics-cert not mounted after 20 minutes. Dec 08 18:53:01 crc kubenswrapper[4998]: elif [[ "${HAS_LOGGED_INFO}" -eq 0 ]] ; then Dec 08 18:53:01 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-cert not mounted. Waiting 20 minutes. Dec 08 18:53:01 crc kubenswrapper[4998]: HAS_LOGGED_INFO=1 Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: } Dec 08 18:53:01 crc kubenswrapper[4998]: while [[ ! -f "${TLS_PK}" || ! -f "${TLS_CERT}" ]] ; do Dec 08 18:53:01 crc kubenswrapper[4998]: log_missing_certs Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 5 Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: echo $(date -Iseconds) INFO: ovn-control-plane-metrics-certs mounted, starting kube-rbac-proxy Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/kube-rbac-proxy \ Dec 08 18:53:01 crc kubenswrapper[4998]: --logtostderr \ Dec 08 18:53:01 crc kubenswrapper[4998]: --secure-listen-address=:9108 \ Dec 08 18:53:01 crc kubenswrapper[4998]: --tls-cipher-suites=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 \ Dec 08 18:53:01 crc kubenswrapper[4998]: --upstream=http://127.0.0.1:29108/ \ Dec 08 18:53:01 crc kubenswrapper[4998]: --tls-private-key-file=${TLS_PK} \ Dec 08 18:53:01 crc kubenswrapper[4998]: --tls-cert-file=${TLS_CERT} Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:9108,ContainerPort:9108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{20971520 0} {} 20Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovn-control-plane-metrics-cert,ReadOnly:true,MountPath:/etc/pki/tls/metrics-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj679,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ql7xr_openshift-ovn-kubernetes(b8867028-389a-494e-b230-ed29201b63ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.979235 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:node-ca,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418,Command:[/bin/sh -c trap 'jobs -p | xargs -r kill; echo shutting down node-ca; exit 0' TERM Dec 08 18:53:01 crc kubenswrapper[4998]: while [ true ]; Dec 08 18:53:01 crc kubenswrapper[4998]: do Dec 08 18:53:01 crc kubenswrapper[4998]: for f in $(ls /tmp/serviceca); do Dec 08 18:53:01 crc kubenswrapper[4998]: echo $f Dec 08 18:53:01 crc kubenswrapper[4998]: ca_file_path="/tmp/serviceca/${f}" Dec 08 18:53:01 crc kubenswrapper[4998]: f=$(echo $f | sed -r 's/(.*)\.\./\1:/') Dec 08 18:53:01 crc kubenswrapper[4998]: reg_dir_path="/etc/docker/certs.d/${f}" Dec 08 18:53:01 crc kubenswrapper[4998]: if [ -e "${reg_dir_path}" ]; then Dec 08 18:53:01 crc kubenswrapper[4998]: cp -u $ca_file_path $reg_dir_path/ca.crt Dec 08 18:53:01 crc kubenswrapper[4998]: else Dec 08 18:53:01 crc kubenswrapper[4998]: mkdir $reg_dir_path Dec 08 18:53:01 crc kubenswrapper[4998]: cp $ca_file_path $reg_dir_path/ca.crt Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: for d in $(ls /etc/docker/certs.d); do Dec 08 18:53:01 crc kubenswrapper[4998]: echo $d Dec 08 18:53:01 crc kubenswrapper[4998]: dp=$(echo $d | sed -r 's/(.*):/\1\.\./') Dec 08 18:53:01 crc kubenswrapper[4998]: reg_conf_path="/tmp/serviceca/${dp}" Dec 08 18:53:01 crc kubenswrapper[4998]: if [ ! 
-e "${reg_conf_path}" ]; then Dec 08 18:53:01 crc kubenswrapper[4998]: rm -rf /etc/docker/certs.d/$d Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: sleep 60 & wait ${!} Dec 08 18:53:01 crc kubenswrapper[4998]: done Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{10485760 0} {} 10Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:serviceca,ReadOnly:false,MountPath:/tmp/serviceca,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:host,ReadOnly:false,MountPath:/etc/docker/certs.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2qzp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*1001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod node-ca-lmsm8_openshift-image-registry(2cc4703c-51fa-4a35-ab04-0a6028035fb2): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.979904 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"machine-config-daemon\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.980410 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"node-ca\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-image-registry/node-ca-lmsm8" podUID="2cc4703c-51fa-4a35-ab04-0a6028035fb2" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.982116 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:ovnkube-cluster-manager,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport Dec 08 18:53:01 crc kubenswrapper[4998]: source "/env/_master" Dec 08 18:53:01 crc kubenswrapper[4998]: set +o allexport Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc 
kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_join_subnet_opt="--gateway-v4-join-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_join_subnet_opt="--gateway-v6-join-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v4_transit_switch_subnet_opt="--cluster-manager-v4-transit-switch-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "" != "" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: ovn_v6_transit_switch_subnet_opt="--cluster-manager-v6-transit-switch-subnet " Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: dns_name_resolver_enabled_flag="--enable-dns-name-resolver" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: persistent_ips_enabled_flag="--enable-persistent-ips" Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # This is needed so that converting clusters from GA to TP Dec 08 18:53:01 crc kubenswrapper[4998]: # will rollout control plane pods as well Dec 08 18:53:01 crc kubenswrapper[4998]: network_segmentation_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" != "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_enabled_flag="--enable-multi-network" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: network_segmentation_enabled_flag="--enable-network-segmentation" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: route_advertisements_enable_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: route_advertisements_enable_flag="--enable-route-advertisements" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: preconfigured_udn_addresses_enable_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: preconfigured_udn_addresses_enable_flag="--enable-preconfigured-udn-addresses" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Enable multi-network policy if configured (control-plane always full mode) Dec 08 
18:53:01 crc kubenswrapper[4998]: multi_network_policy_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "false" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: multi_network_policy_enabled_flag="--enable-multi-networkpolicy" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: # Enable admin network policy if configured (control-plane always full mode) Dec 08 18:53:01 crc kubenswrapper[4998]: admin_network_policy_enabled_flag= Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ "true" == "true" ]]; then Dec 08 18:53:01 crc kubenswrapper[4998]: admin_network_policy_enabled_flag="--enable-admin-network-policy" Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: if [ "shared" == "shared" ]; then Dec 08 18:53:01 crc kubenswrapper[4998]: gateway_mode_flags="--gateway-mode shared" Dec 08 18:53:01 crc kubenswrapper[4998]: elif [ "shared" == "local" ]; then Dec 08 18:53:01 crc kubenswrapper[4998]: gateway_mode_flags="--gateway-mode local" Dec 08 18:53:01 crc kubenswrapper[4998]: else Dec 08 18:53:01 crc kubenswrapper[4998]: echo "Invalid OVN_GATEWAY_MODE: \"shared\". Must be \"local\" or \"shared\"." Dec 08 18:53:01 crc kubenswrapper[4998]: exit 1 Dec 08 18:53:01 crc kubenswrapper[4998]: fi Dec 08 18:53:01 crc kubenswrapper[4998]: Dec 08 18:53:01 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - ovnkube-control-plane - start ovnkube --init-cluster-manager ${K8S_NODE}" Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/ovnkube \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-interconnect \ Dec 08 18:53:01 crc kubenswrapper[4998]: --init-cluster-manager "${K8S_NODE}" \ Dec 08 18:53:01 crc kubenswrapper[4998]: --config-file=/run/ovnkube-config/ovnkube.conf \ Dec 08 18:53:01 crc kubenswrapper[4998]: --loglevel "${OVN_KUBE_LOG_LEVEL}" \ Dec 08 18:53:01 crc kubenswrapper[4998]: --metrics-bind-address "127.0.0.1:29108" \ Dec 08 18:53:01 crc kubenswrapper[4998]: --metrics-enable-pprof \ Dec 08 18:53:01 crc kubenswrapper[4998]: --metrics-enable-config-duration \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v4_join_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v6_join_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v4_transit_switch_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${ovn_v6_transit_switch_subnet_opt} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${dns_name_resolver_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${persistent_ips_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${multi_network_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${network_segmentation_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${gateway_mode_flags} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${route_advertisements_enable_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${preconfigured_udn_addresses_enable_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-ip=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-firewall=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-qos=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-egress-service=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-multicast \ Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-multi-external-gateway=true \ Dec 08 18:53:01 crc kubenswrapper[4998]: ${multi_network_policy_enabled_flag} \ Dec 08 18:53:01 crc kubenswrapper[4998]: 
${admin_network_policy_enabled_flag} Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics-port,HostPort:29108,ContainerPort:29108,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OVN_KUBE_LOG_LEVEL,Value:4,ValueFrom:nil,},EnvVar{Name:K8S_NODE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{314572800 0} {} 300Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:ovnkube-config,ReadOnly:false,MountPath:/run/ovnkube-config/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pj679,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovnkube-control-plane-57b78d8988-ql7xr_openshift-ovn-kubernetes(b8867028-389a-494e-b230-ed29201b63ca): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.983973 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"5cb67db78fe850ae4cbfba15707633498c7c3980c91bdda89ca806b839c69d3a"} Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.984070 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"kube-rbac-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"ovnkube-cluster-manager\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" podUID="b8867028-389a-494e-b230-ed29201b63ca" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.985563 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"a8cf3f9200d818f79de2f1fe543b32fabfcc2b83810ed0139be11e74dc523b6a"} Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.986515 4998 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.987575 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:webhook,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then
Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport
Dec 08 18:53:01 crc kubenswrapper[4998]: source "/env/_master"
Dec 08 18:53:01 crc kubenswrapper[4998]: set +o allexport
Dec 08 18:53:01 crc kubenswrapper[4998]: fi
Dec 08 18:53:01 crc kubenswrapper[4998]: # OVN-K will try to remove hybrid overlay node annotations even when the hybrid overlay is not enabled.
Dec 08 18:53:01 crc kubenswrapper[4998]: # https://github.com/ovn-org/ovn-kubernetes/blob/ac6820df0b338a246f10f412cd5ec903bd234694/go-controller/pkg/ovn/master.go#L791
Dec 08 18:53:01 crc kubenswrapper[4998]: ho_enable="--enable-hybrid-overlay"
Dec 08 18:53:01 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start webhook"
Dec 08 18:53:01 crc kubenswrapper[4998]: # extra-allowed-user: service account `ovn-kubernetes-control-plane`
Dec 08 18:53:01 crc kubenswrapper[4998]: # sets pod annotations in multi-homing layer3 network controller (cluster-manager)
Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Dec 08 18:53:01 crc kubenswrapper[4998]: --webhook-cert-dir="/etc/webhook-cert" \
Dec 08 18:53:01 crc kubenswrapper[4998]: --webhook-host=127.0.0.1 \
Dec 08 18:53:01 crc kubenswrapper[4998]: --webhook-port=9743 \
Dec 08 18:53:01 crc kubenswrapper[4998]: ${ho_enable} \
Dec 08 18:53:01 crc kubenswrapper[4998]: --enable-interconnect \
Dec 08 18:53:01 crc kubenswrapper[4998]: --disable-approver \
Dec 08 18:53:01 crc kubenswrapper[4998]: --extra-allowed-user="system:serviceaccount:openshift-ovn-kubernetes:ovn-kubernetes-control-plane" \
Dec 08 18:53:01 crc kubenswrapper[4998]: --wait-for-kubernetes-api=200s \
Dec 08 18:53:01 crc kubenswrapper[4998]: --pod-admission-conditions="/var/run/ovnkube-identity-config/additional-pod-admission-cond.json" \
Dec 08 18:53:01 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}"
Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:KUBERNETES_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/etc/webhook-cert/,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError"
Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.987726 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"iptables-alerter\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/iptables-alerter-5jnd7" podUID="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.988135 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"264e6c301f264d3776b396f9266e271c2f82f20d51f6468a487b328c31a37c7a"}
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.988680 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.988743 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.988754 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.988768 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.988778 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:01Z","lastTransitionTime":"2025-12-08T18:53:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
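[editor's note: the following annotation and indented commands are added for this review and are not part of the captured journal]
The two failure modes in this window are linked. Every CreateContainerConfigError above and below carries the kubelet message "services have not yet been read at least once, cannot construct envvars": kubelet will not build a container's legacy per-service environment variables until its Service informer has completed an initial list, so container starts fail while the node's API caches are still warming up. The status-patch failures that follow fail in turn because the "pod.network-node-identity.openshift.io" admission webhook served on 127.0.0.1:9743 is hosted by one of the very pods that cannot start yet. A minimal bash sketch of how one might confirm both conditions from the node, assuming shell access on the host (illustrative only):
    # Probe the admission webhook endpoint reported in the status-patch errors below.
    curl -sk -o /dev/null -w '%{http_code}\n' 'https://127.0.0.1:9743/pod?timeout=10s' || echo 'connection refused'
    # Check for CNI configuration, whose absence drives the NodeNotReady condition above.
    ls /etc/kubernetes/cni/net.d/ 2>/dev/null || echo 'no CNI configuration yet'
[end editor's note]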
Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.990204 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:approver,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,Command:[/bin/bash -c set -xe
Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f "/env/_master" ]]; then
Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport
Dec 08 18:53:01 crc kubenswrapper[4998]: source "/env/_master"
Dec 08 18:53:01 crc kubenswrapper[4998]: set +o allexport
Dec 08 18:53:01 crc kubenswrapper[4998]: fi
Dec 08 18:53:01 crc kubenswrapper[4998]: 
Dec 08 18:53:01 crc kubenswrapper[4998]: echo "I$(date "+%m%d %H:%M:%S.%N") - network-node-identity - start approver"
Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/ovnkube-identity --k8s-apiserver=https://api-int.crc.testing:6443 \
Dec 08 18:53:01 crc kubenswrapper[4998]: --disable-webhook \
Dec 08 18:53:01 crc kubenswrapper[4998]: --csr-acceptance-conditions="/var/run/ovnkube-identity-config/additional-cert-acceptance-cond.json" \
Dec 08 18:53:01 crc kubenswrapper[4998]: --loglevel="${LOGLEVEL}"
Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOGLEVEL,Value:4,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:env-overrides,ReadOnly:false,MountPath:/env,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ovnkube-identity-cm,ReadOnly:false,MountPath:/var/run/ovnkube-identity-config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8nt2j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000500000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-node-identity-dgvkt_openshift-network-node-identity(fc4541ce-7789-4670-bc75-5c2868e52ce0): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars
Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError"
Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.990213 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerStarted","Data":"4053a12bc6ad99e7a24401facce950b3dc039d0bf752fc206e30d94fdbece277"}
Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.990817 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=<
Dec 08 18:53:01 crc kubenswrapper[4998]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash
Dec 08 18:53:01 crc kubenswrapper[4998]: set -o allexport
Dec 08 18:53:01 crc kubenswrapper[4998]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then
Dec 08 18:53:01 crc kubenswrapper[4998]: source /etc/kubernetes/apiserver-url.env
Dec 08 18:53:01 crc kubenswrapper[4998]: else
Dec 08 18:53:01 crc kubenswrapper[4998]: echo "Error: /etc/kubernetes/apiserver-url.env is missing"
Dec 08 18:53:01 crc kubenswrapper[4998]: exit 1
Dec 08 18:53:01 crc kubenswrapper[4998]: fi
Dec 08 18:53:01 crc kubenswrapper[4998]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104
Dec 08 18:53:01 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a7
43aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:01 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.991922 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.991957 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"webhook\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\", failed to \"StartContainer\" for \"approver\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"]" pod="openshift-network-node-identity/network-node-identity-dgvkt" 
podUID="fc4541ce-7789-4670-bc75-5c2868e52ce0" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.992306 4998 kuberuntime_manager.go:1358] "Unhandled Error" err="init container &Container{Name:egress-router-binary-copy,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,Command:[/entrypoint/cnibincopy.sh],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:RHEL8_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel8/bin/,ValueFrom:nil,},EnvVar{Name:RHEL9_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/rhel9/bin/,ValueFrom:nil,},EnvVar{Name:DEFAULT_SOURCE_DIRECTORY,Value:/usr/src/egress-router-cni/bin/,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-binary-copy,ReadOnly:false,MountPath:/entrypoint,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cnibin,ReadOnly:false,MountPath:/host/opt/cni/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:os-release,ReadOnly:true,MountPath:/host/etc/os-release,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s2w77,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod multus-additional-cni-plugins-9kdnj_openshift-multus(085d31f3-c7fb-4aca-903c-9db17e8d0047): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars" logger="UnhandledError" Dec 08 18:53:01 crc kubenswrapper[4998]: E1208 18:53:01.993517 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"egress-router-binary-copy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" podUID="085d31f3-c7fb-4aca-903c-9db17e8d0047" Dec 08 18:53:01 crc kubenswrapper[4998]: I1208 18:53:01.996247 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"085d31f3-c7fb-4aca-903c-9db17e8d0047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plu
gin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9kdnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.009323 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8867028-389a-494e-b230-ed29201b63ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ql7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.029061 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":
\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPat
h\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"
}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.041252 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99ffc072-8f76-4a27-bb7b-b1ff802d45cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\
\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"
mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.052078 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.061744 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.072348 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.081369 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.090642 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.092778 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.093090 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.093170 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.093258 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.093355 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.099433 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjfn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a43997-c346-42c7-a485-b2b55c22c9c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26tn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjfn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.109779 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4efefa79-ffc6-4211-84df-8feef5c66eba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://355883c3b875ba7df515d5d07538ec1a017d38a87bf6cbef9f6a939b1b0f860c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://83458962d98a0db15939e11f6ac7a1f814ac5cf95aec1adc4993753182d9e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd499714a3956c76fc95cf29eb557f332ab8a3d8927878cd076ed6fe0b97da75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.117143 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0da1f94e-cb48-4dc3-ac19-c4b1cb4dbc24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://486121fa7a66609e79a4ec8139d2aadefdc5b8d1ed0c710a77116e00e8a28078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.123494 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab88c832-775d-46c6-9167-aa51d0574b17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z9wmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.141570 4998 status_manager.go:919] "Failed 
to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"806263eb-1527-40bc-9f4d-dbaa9ccae40c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\
\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\
\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.150096 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.159470 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-72nfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gdw2c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-72nfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.172792 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf1d2f-fd23-4a18-96bc-cfec142c5909\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"rea
dy\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW1208 18:52:49.039997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:49.040168 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:49.041022 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2686458259/tls.crt::/tmp/serving-cert-2686458259/tls.key\\\\\\\"\\\\nI1208 18:52:49.681709 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:49.685354 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:49.685372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:49.685429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:49.685439 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:49.690533 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 18:52:49.690550 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 18:52:49.690565 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690572 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:49.690580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:49.690583 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:49.690586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 18:52:49.693927 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 
127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.184388 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.195841 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.195939 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.195954 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.195972 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.195984 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.201242 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf1d2f-fd23-4a18-96bc-cfec142c5909\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW1208 18:52:49.039997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:49.040168 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:49.041022 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2686458259/tls.crt::/tmp/serving-cert-2686458259/tls.key\\\\\\\"\\\\nI1208 18:52:49.681709 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:49.685354 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:49.685372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:49.685429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:49.685439 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:49.690533 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 18:52:49.690550 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 18:52:49.690565 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690572 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:49.690580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:49.690583 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:49.690586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 18:52:49.693927 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.212288 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.221072 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.234643 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"085d31f3-c7fb-4aca-903c-9db17e8d0047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9kdnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.243556 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8867028-389a-494e-b230-ed29201b63ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ql7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.259760 4998 
status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz
\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.271971 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99ffc072-8f76-4a27-bb7b-b1ff802d45cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResourc
es\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.284291 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.295218 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.297481 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.297528 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.297541 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.297558 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.297570 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.307240 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.316849 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.326167 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.334080 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjfn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a43997-c346-42c7-a485-b2b55c22c9c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26tn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for 
pod \"openshift-dns\"/\"node-resolver-wjfn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.365519 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.365544 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.365629 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.365976 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4efefa79-ffc6-4211-84df-8feef5c66eba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://355883c3b875ba7df515d5d07538ec1a017d38a87bf6cbef9f6a939b1b0f860c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://83458962d98a0db15939e11f6ac7a1f814ac5cf95aec1adc4993753182d9e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2
355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd499714a3956c76fc95cf29eb557f332ab8a3d8927878cd076ed6fe0b97da75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.366290 4998 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.366411 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.366523 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.400791 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.400843 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.400857 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.400878 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.400890 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.407359 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0da1f94e-cb48-4dc3-ac19-c4b1cb4dbc24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://486121fa7a66609e79a4ec8139d2aadefdc5b8d1ed0c710a77116e00e8a28078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"
name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.446887 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab88c832-775d-46c6-9167-aa51d0574b17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z9wmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to 
call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.495829 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"806263eb-1527-40bc-9f4d-dbaa9ccae40c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/va
r/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name
\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.503549 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.503623 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.503638 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.503656 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.503669 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.527644 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.572035 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-72nfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gdw2c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-72nfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.605893 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.605998 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.606030 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.606052 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.606062 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.613437 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.613630 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.613749 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.613669067 +0000 UTC m=+88.261711777 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.613785 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.613809 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.613829 4998 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.613873 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.613860152 +0000 UTC m=+88.261902832 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.613930 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.613987 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.614015 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614210 4998 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614347 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.614327146 +0000 UTC m=+88.262369836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614212 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614592 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614217 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614838 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.614825498 +0000 UTC m=+88.262868188 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614667 4998 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.614890 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.61487988 +0000 UTC m=+88.262922570 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.707994 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.708200 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.708260 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.708355 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.708412 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.715309 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.715421 4998 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: E1208 18:53:02.715627 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs podName:ab88c832-775d-46c6-9167-aa51d0574b17 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:04.715611316 +0000 UTC m=+88.363654006 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs") pod "network-metrics-daemon-z9wmf" (UID: "ab88c832-775d-46c6-9167-aa51d0574b17") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.810034 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.810107 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.810126 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.810154 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.810171 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.913040 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.913324 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.913453 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.913556 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:02 crc kubenswrapper[4998]: I1208 18:53:02.913678 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:02Z","lastTransitionTime":"2025-12-08T18:53:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.016786 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.017065 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.017205 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.017312 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.017425 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.125976 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.126033 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.126061 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.126085 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.126102 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.228552 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.228644 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.228667 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.228737 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.228758 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.331099 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.331156 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.331168 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.331186 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.331200 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.365536 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:03 crc kubenswrapper[4998]: E1208 18:53:03.365756 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.433466 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.433535 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.433555 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.433675 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.433840 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.537040 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.537136 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.537165 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.537198 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.537223 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.639838 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.639912 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.639930 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.639953 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.639968 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.742548 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.742610 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.742630 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.742658 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.742676 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.844773 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.844820 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.844834 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.844852 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.844865 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.947076 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.947121 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.947132 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.947157 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:03 crc kubenswrapper[4998]: I1208 18:53:03.947180 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:03Z","lastTransitionTime":"2025-12-08T18:53:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.050347 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.050417 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.050436 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.050467 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.050486 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.152814 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.153134 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.153317 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.153461 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.153586 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.256739 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.256992 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.257056 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.257122 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.257187 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.360006 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.360094 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.360113 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.360513 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.360758 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.365306 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.365469 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.365312 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.365594 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.365787 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.366002 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.464320 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.464387 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.464406 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.464430 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.464448 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.567427 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.567800 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.567941 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.568204 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.568346 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.644853 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.645001 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645096 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.645057518 +0000 UTC m=+92.293100248 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.645195 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.645257 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.645361 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645446 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645475 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645493 4998 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645574 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.645552971 +0000 UTC m=+92.293595701 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645584 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645630 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645655 4998 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645790 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.645759337 +0000 UTC m=+92.293802077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.645818 4998 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.646017 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.645969943 +0000 UTC m=+92.294012673 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.646367 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.646471 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:08.646447505 +0000 UTC m=+92.294490235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.670993 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.671071 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.671096 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.671128 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.671154 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.746184 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.746368 4998 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: E1208 18:53:04.746459 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs podName:ab88c832-775d-46c6-9167-aa51d0574b17 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:08.746434861 +0000 UTC m=+92.394477581 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs") pod "network-metrics-daemon-z9wmf" (UID: "ab88c832-775d-46c6-9167-aa51d0574b17") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.774009 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.774085 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.774107 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.774159 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.774179 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.877209 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.877270 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.877288 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.877329 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.877351 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.979338 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.979418 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.979457 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.979476 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:04 crc kubenswrapper[4998]: I1208 18:53:04.979487 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:04Z","lastTransitionTime":"2025-12-08T18:53:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.081802 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.081856 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.081868 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.081888 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.081900 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.184659 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.184922 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.185059 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.185133 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.185223 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.288034 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.288118 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.288138 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.288164 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.288184 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.365453 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:05 crc kubenswrapper[4998]: E1208 18:53:05.366323 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.391492 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.391626 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.391643 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.391663 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.391678 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.495320 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.495454 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.495482 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.495548 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.495577 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.598666 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.599261 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.599390 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.599489 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.599573 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.702660 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.702804 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.702837 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.702867 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.702893 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.805725 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.806423 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.806494 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.806526 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.806544 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.910305 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.910874 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.910950 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.911020 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:05 crc kubenswrapper[4998]: I1208 18:53:05.911127 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:05Z","lastTransitionTime":"2025-12-08T18:53:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.014346 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.014431 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.014446 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.014466 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.014479 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.117046 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.117155 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.117191 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.117226 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.117252 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.219824 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.219892 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.219906 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.219927 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.219938 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.322279 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.322819 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.322891 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.322959 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.323029 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.365841 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.365865 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.365958 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:06 crc kubenswrapper[4998]: E1208 18:53:06.366941 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:06 crc kubenswrapper[4998]: E1208 18:53:06.367109 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:06 crc kubenswrapper[4998]: E1208 18:53:06.367286 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.426729 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.426805 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.426818 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.426844 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.426857 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.530150 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.530242 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.530331 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.530368 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.530398 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.633524 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.634375 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.634488 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.634628 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.634744 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.737673 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.738944 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.739069 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.739102 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.739117 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.842281 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.842352 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.842365 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.842385 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.842398 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.945395 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.945468 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.945480 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.945501 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:06 crc kubenswrapper[4998]: I1208 18:53:06.945513 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:06Z","lastTransitionTime":"2025-12-08T18:53:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.049263 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.049370 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.049400 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.049446 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.049472 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.152479 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.152540 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.152559 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.152578 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.152593 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.255348 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.255497 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.255534 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.255562 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.255574 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.358605 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.358669 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.358679 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.358718 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.358765 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.366386 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:07 crc kubenswrapper[4998]: E1208 18:53:07.366720 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.407104 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"806263eb-1527-40bc-9f4d-dbaa9ccae40c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metri
cs\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\
":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.435350 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.467966 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.468040 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.468053 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.468094 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.468065 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-72nfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gdw2c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-72nfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.468108 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.490122 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf1d2f-fd23-4a18-96bc-cfec142c5909\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW1208 18:52:49.039997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:49.040168 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:49.041022 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2686458259/tls.crt::/tmp/serving-cert-2686458259/tls.key\\\\\\\"\\\\nI1208 18:52:49.681709 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:49.685354 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:49.685372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:49.685429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:49.685439 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:49.690533 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 18:52:49.690550 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 18:52:49.690565 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690572 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:49.690580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:49.690583 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:49.690586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 18:52:49.693927 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.517022 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.534439 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.554070 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"085d31f3-c7fb-4aca-903c-9db17e8d0047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9kdnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.568257 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8867028-389a-494e-b230-ed29201b63ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ql7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.570778 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.570848 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.570867 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.570887 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.570903 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.590055 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.607121 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99ffc072-8f76-4a27-bb7b-b1ff802d45cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.622250 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.637516 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.651429 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.671994 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.673310 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.673406 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.673418 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.673436 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.673448 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.688418 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.699365 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjfn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a43997-c346-42c7-a485-b2b55c22c9c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26tn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjfn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.710503 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4efefa79-ffc6-4211-84df-8feef5c66eba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://355883c3b875ba7df515d5d07538ec1a017d38a87bf6cbef9f6a939b1b0f860c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://83458962d98a0db15939e11f6ac7a1f814ac5cf95aec1adc4993753182d9e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd499714a3956c76fc95cf29eb557f332ab8a3d8927878cd076ed6fe0b97da75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.721679 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0da1f94e-cb48-4dc3-ac19-c4b1cb4dbc24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://486121fa7a66609e79a4ec8139d2aadefdc5b8d1ed0c710a77116e00e8a28078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.735182 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab88c832-775d-46c6-9167-aa51d0574b17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z9wmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.776305 4998 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.776367 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.776379 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.776397 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.776409 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.880121 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.880201 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.880215 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.880236 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.880251 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.983017 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.983092 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.983102 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.983119 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:07 crc kubenswrapper[4998]: I1208 18:53:07.983133 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:07Z","lastTransitionTime":"2025-12-08T18:53:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.086217 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.086281 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.086295 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.086316 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.086327 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:08Z","lastTransitionTime":"2025-12-08T18:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.189239 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.189318 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.189330 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.189352 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.189365 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:08Z","lastTransitionTime":"2025-12-08T18:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.293075 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.293153 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.293167 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.293193 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.293209 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:08Z","lastTransitionTime":"2025-12-08T18:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.365419 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.365480 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.365646 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.365723 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.365872 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.366004 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.397004 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.397076 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.397088 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.397115 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.397129 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:08Z","lastTransitionTime":"2025-12-08T18:53:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.700943 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.701114 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701208 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:16.701163362 +0000 UTC m=+100.349206072 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701263 4998 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.701335 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701415 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701447 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.701393997 +0000 UTC m=+100.349436707 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701484 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.701470599 +0000 UTC m=+100.349513289 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.701529 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.701600 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701874 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701888 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701902 4998 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.701933 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.701923882 +0000 UTC m=+100.349966562 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.702342 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.702371 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.702381 4998 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.702419 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.702407465 +0000 UTC m=+100.350450155 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 18:53:08 crc kubenswrapper[4998]: I1208 18:53:08.802844 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.803010 4998 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:08 crc kubenswrapper[4998]: E1208 18:53:08.803100 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs podName:ab88c832-775d-46c6-9167-aa51d0574b17 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:16.803077639 +0000 UTC m=+100.451120329 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs") pod "network-metrics-daemon-z9wmf" (UID: "ab88c832-775d-46c6-9167-aa51d0574b17") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.366091 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:09 crc kubenswrapper[4998]: E1208 18:53:09.366571 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.427844 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.428273 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.428374 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.428465 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.428561 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.531856 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.531932 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.531948 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.532011 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.532032 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.726317 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.726379 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.726393 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.726412 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.726424 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:09 crc kubenswrapper[4998]: E1208 18:53:09.743150 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.749303 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.749386 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.749405 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.749430 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.749447 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.770057 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.770126 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.770140 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.770160 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.770174 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:09 crc kubenswrapper[4998]: E1208 18:53:09.784608 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"7800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"24143980Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"8\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"24604780Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:09Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"c1301796-dc2b-4ae4-9d55-0e992e137827\\\",\\\"systemUUID\\\":\\\"57933dbd-5a28-4dc8-9ba9-34a04e3c67e1\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.789618 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.789671 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.789701 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.789719 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.789730 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.808505 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.808742 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.808861 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.808956 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.809038 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:09 crc kubenswrapper[4998]: E1208 18:53:09.821973 4998 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.823867 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.823917 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.823930 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.823946 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.823956 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.927470 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.927556 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.927569 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.927602 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:09 crc kubenswrapper[4998]: I1208 18:53:09.927615 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:09Z","lastTransitionTime":"2025-12-08T18:53:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.030869 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.030980 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.030995 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.031017 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.031033 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.133222 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.133373 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.133450 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.133475 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.133488 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.236896 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.236966 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.236981 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.237001 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.237015 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.339969 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.340042 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.340067 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.340098 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.340125 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.365665 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.366014 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.366081 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:10 crc kubenswrapper[4998]: E1208 18:53:10.366262 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:10 crc kubenswrapper[4998]: E1208 18:53:10.366379 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:10 crc kubenswrapper[4998]: E1208 18:53:10.366600 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.442880 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.442976 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.443005 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.443025 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.443034 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.545241 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.545327 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.545352 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.545387 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.545413 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.648646 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.648739 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.648755 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.648776 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.648792 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.751540 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.751607 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.751626 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.751658 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.751676 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.853433 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.853488 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.853502 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.853521 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.853533 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.956042 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.956113 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.956132 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.956157 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:10 crc kubenswrapper[4998]: I1208 18:53:10.956174 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:10Z","lastTransitionTime":"2025-12-08T18:53:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.058235 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.058298 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.058315 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.058337 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.058352 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:11Z","lastTransitionTime":"2025-12-08T18:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.160287 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.160326 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.160337 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.160354 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.160364 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:11Z","lastTransitionTime":"2025-12-08T18:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.373640 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf"
Dec 08 18:53:11 crc kubenswrapper[4998]: E1208 18:53:11.374276 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.876717 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.876789 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.876805 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.876824 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.876836 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:11Z","lastTransitionTime":"2025-12-08T18:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
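The five-entry blocks above recur because the kubelet rebuilds its node conditions on every status-update tick while the CRI runtime keeps reporting NetworkReady=false. The following is a minimal, self-contained Go sketch of that Ready-condition flip; the type and function names are hypothetical illustrations, not the kubelet's actual setters.go code.

package main

import (
	"errors"
	"fmt"
	"time"
)

// Condition mirrors the shape of the node condition printed by the
// "Node became not ready" entries above (hypothetical local type).
type Condition struct {
	Type               string
	Status             string
	Reason             string
	Message            string
	LastHeartbeatTime  time.Time
	LastTransitionTime time.Time
}

// runtimeNetworkError reproduces the CRI status behind the log message:
// the runtime reports NetworkReady=false until a CNI config appears.
func runtimeNetworkError() error {
	return errors.New("container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?")
}

// readyCondition rebuilds the Ready condition on each status tick, which is
// why the same block recurs roughly every 100ms in the log above.
func readyCondition(runtimeErr error, now time.Time) Condition {
	c := Condition{Type: "Ready", Status: "True", Reason: "KubeletReady", LastHeartbeatTime: now, LastTransitionTime: now}
	if runtimeErr != nil {
		c.Status = "False"
		c.Reason = "KubeletNotReady"
		c.Message = runtimeErr.Error()
	}
	return c
}

func main() {
	c := readyCondition(runtimeNetworkError(), time.Now())
	fmt.Printf("Node became not ready: %+v\n", c)
}

The condition clears on its own once the runtime's network status flips; nothing in the kubelet has to be restarted for the node to go Ready.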
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.978888 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.978931 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.978940 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.978956 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:11 crc kubenswrapper[4998]: I1208 18:53:11.978965 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:11Z","lastTransitionTime":"2025-12-08T18:53:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 18:53:12.365106 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 18:53:12 crc kubenswrapper[4998]: E1208 18:53:12.365264 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 18:53:12.365298 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 18:53:12.365442 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 18:53:12 crc kubenswrapper[4998]: E1208 18:53:12.365544 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 18:53:12 crc kubenswrapper[4998]: E1208 18:53:12.365621 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:12 crc kubenswrapper[4998]: E1208 18:53:12.368291 4998 kuberuntime_manager.go:1358] "Unhandled Error" err=< Dec 08 18:53:12 crc kubenswrapper[4998]: container &Container{Name:network-operator,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,Command:[/bin/bash -c #!/bin/bash Dec 08 18:53:12 crc kubenswrapper[4998]: set -o allexport Dec 08 18:53:12 crc kubenswrapper[4998]: if [[ -f /etc/kubernetes/apiserver-url.env ]]; then Dec 08 18:53:12 crc kubenswrapper[4998]: source /etc/kubernetes/apiserver-url.env Dec 08 18:53:12 crc kubenswrapper[4998]: else Dec 08 18:53:12 crc kubenswrapper[4998]: echo "Error: /etc/kubernetes/apiserver-url.env is missing" Dec 08 18:53:12 crc kubenswrapper[4998]: exit 1 Dec 08 18:53:12 crc kubenswrapper[4998]: fi Dec 08 18:53:12 crc kubenswrapper[4998]: exec /usr/bin/cluster-network-operator start --listen=0.0.0.0:9104 Dec 08 18:53:12 crc kubenswrapper[4998]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:cno,HostPort:9104,ContainerPort:9104,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:RELEASE_VERSION,Value:4.20.1,ValueFrom:nil,},EnvVar{Name:KUBE_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:951276a60f15185a05902cf1ec49b6db3e4f049ec638828b336aed496f8dfc45,ValueFrom:nil,},EnvVar{Name:KUBE_RBAC_PROXY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5,ValueFrom:nil,},EnvVar{Name:MULTUS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05,ValueFrom:nil,},EnvVar{Name:MULTUS_ADMISSION_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b5000f8f055fd8f734ef74afbd9bd5333a38345cbc4959ddaad728b8394bccd4,ValueFrom:nil,},EnvVar{Name:CNI_PLUGINS_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d,ValueFrom:nil,},EnvVar{Name:BOND_CNI_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782,ValueFrom:nil,},EnvVar{Name:WHEREABOUTS_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0,ValueFrom:nil,},EnvVar{Name:ROUTE_OVERRRIDE_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4,ValueFrom:nil,},EnvVar{Name:MULTUS_NETWORKPOLICY_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be136d591a0eeb3f7bedf04aabb5481a23b6645316d5cef3cd5be1787344c2b5,ValueFrom:nil,},EnvVar{Name:OVN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122,ValueFrom:nil,},EnvVar{Name:OVN_NB_RAFT_ELECTION_TIMER,Value:10,ValueFrom:nil,},EnvVar{Name:OVN_SB_RAFT_ELECTION_TIMER,Value:16,ValueFrom:nil,},EnvVar{Name:OVN_NORTHD_PROBE_INTERVAL,Value:10000,ValueFrom:nil,},EnvVar{Name:OVN_CONTROLLER_INACTIVITY_PROBE,Value:180000,ValueFrom:nil,},EnvVar{Name:OVN_NB_INACTIVITY_PROBE,Value:60000,ValueFrom:nil,},EnvVar{Name:EGRESS_ROUTER_CNI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a,ValueFrom:nil,},EnvVar{
Name:NETWORK_METRICS_DAEMON_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_SOURCE_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_CHECK_TARGET_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:NETWORK_OPERATOR_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b,ValueFrom:nil,},EnvVar{Name:CLOUD_NETWORK_CONFIG_CONTROLLER_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:91997a073272252cac9cd31915ec74217637c55d1abc725107c6eb677ddddc9b,ValueFrom:nil,},EnvVar{Name:CLI_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e,ValueFrom:nil,},EnvVar{Name:FRR_K8S_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a974f04d4aefdb39bf2d4649b24e7e0e87685afa3d07ca46234f1a0c5688e4b,ValueFrom:nil,},EnvVar{Name:NETWORKING_CONSOLE_PLUGIN_IMAGE,Value:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host-etc-kube,ReadOnly:true,MountPath:/etc/kubernetes,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:metrics-tls,ReadOnly:false,MountPath:/var/run/secrets/serving-cert,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7xz2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod network-operator-7bdcf4f5bd-7fjxv_openshift-network-operator(34177974-8d82-49d2-a763-391d0df3bbd8): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Dec 08 18:53:12 crc kubenswrapper[4998]: > logger="UnhandledError" Dec 08 18:53:12 crc kubenswrapper[4998]: E1208 18:53:12.369502 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"network-operator\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" podUID="34177974-8d82-49d2-a763-391d0df3bbd8" Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 18:53:12.388096 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 
18:53:12.388158 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 18:53:12.388171 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 18:53:12.388190 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:12 crc kubenswrapper[4998]: I1208 18:53:12.388224 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:12Z","lastTransitionTime":"2025-12-08T18:53:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.210718 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.210769 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.210782 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.210802 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.210812 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:13Z","lastTransitionTime":"2025-12-08T18:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
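The CreateContainerConfigError above ("services have not yet been read at least once, cannot construct envvars") is a start-ordering issue: the kubelet derives legacy *_SERVICE_HOST/*_SERVICE_PORT environment variables from its Service informer, and it refuses to build a container's environment until that cache has synced at least once. The "Caches populated" entry for *v1.Service a few lines below is what clears the condition. A hedged Go sketch of the gating follows; the types, the serviceCache name, and the example service values are invented for illustration, not kubelet internals.

package main

import (
	"errors"
	"fmt"
	"strings"
)

// Service is a hypothetical stand-in for the fields kubelet reads
// from its Service lister when building container environments.
type Service struct {
	Name      string
	ClusterIP string
	Port      int
}

type serviceCache struct {
	synced   bool // has the informer delivered its initial list?
	services []Service
}

// makeServiceEnv refuses to run before the first list completes, producing
// the exact failure mode logged above as CreateContainerConfigError.
func (c *serviceCache) makeServiceEnv() ([]string, error) {
	if !c.synced {
		return nil, errors.New("services have not yet been read at least once, cannot construct envvars")
	}
	var env []string
	for _, s := range c.services {
		prefix := strings.ToUpper(strings.ReplaceAll(s.Name, "-", "_"))
		env = append(env,
			fmt.Sprintf("%s_SERVICE_HOST=%s", prefix, s.ClusterIP),
			fmt.Sprintf("%s_SERVICE_PORT=%d", prefix, s.Port))
	}
	return env, nil
}

func main() {
	c := &serviceCache{}
	if _, err := c.makeServiceEnv(); err != nil {
		fmt.Println("CreateContainerConfigError:", err) // state seen above
	}
	// Once the informer reports "Caches populated" (as below), the same
	// call succeeds on the next pod sync. Values here are made up.
	c.synced = true
	c.services = []Service{{Name: "kubernetes", ClusterIP: "10.217.4.1", Port: 443}}
	env, _ := c.makeServiceEnv()
	fmt.Println(env)
}

On this reading, no action is needed for the network-operator pod: it is retried automatically once the apiserver connection delivers the initial Service list.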
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.312521 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.312571 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.312583 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.312599 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.312610 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:13Z","lastTransitionTime":"2025-12-08T18:53:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.343012 4998 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160"
Dec 08 18:53:13 crc kubenswrapper[4998]: I1208 18:53:13.365309 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf"
Dec 08 18:53:13 crc kubenswrapper[4998]: E1208 18:53:13.365461 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.337849 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.337890 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.337901 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.337918 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.337929 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:14Z","lastTransitionTime":"2025-12-08T18:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
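Every NetworkPluginNotReady message in this section comes down to one missing artifact: no config file under /etc/kubernetes/cni/net.d/. A rough Go sketch of that discovery step is below; the real check lives in the CRI implementation and its CNI library, so the file patterns and their precedence here are illustrative assumptions, not the actual CRI-O/ocicni code.

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// findCNIConfig returns the first CNI config file in confDir, or an error
// shaped like the one the kubelet keeps logging while the network operator
// has not yet written its configuration.
func findCNIConfig(confDir string) (string, error) {
	for _, pat := range []string{"*.conflist", "*.conf", "*.json"} {
		matches, err := filepath.Glob(filepath.Join(confDir, pat))
		if err != nil {
			return "", err
		}
		if len(matches) > 0 {
			return matches[0], nil
		}
	}
	return "", fmt.Errorf("no CNI configuration file in %s. Has your network provider started?", confDir)
}

func main() {
	if _, err := findCNIConfig("/etc/kubernetes/cni/net.d/"); err != nil {
		fmt.Fprintln(os.Stderr, "NetworkReady=false:", err)
	}
}

Because the check is just file discovery, the whole cascade above resolves the moment the network plugin (Multus/OVN on this cluster) drops its config into that directory.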
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.365889 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 18:53:14 crc kubenswrapper[4998]: E1208 18:53:14.366107 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.366878 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.367124 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 18:53:14 crc kubenswrapper[4998]: E1208 18:53:14.369671 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 18:53:14 crc kubenswrapper[4998]: E1208 18:53:14.369782 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.440418 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.440463 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.440477 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.440494 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.440505 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:14Z","lastTransitionTime":"2025-12-08T18:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
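The "No sandbox for pod can be found" / "Error syncing pod, skipping" pairs above show the pod worker giving up early: a pod on the cluster network cannot get a sandbox while the network plugin is down, so the sync is abandoned and retried later. Below is a minimal Go sketch with hypothetical types; the host-network exemption is included because that is consistent with host-network pods (etcd, machine-config-daemon) still starting elsewhere in this log.

package main

import (
	"errors"
	"fmt"
)

// Pod is a hypothetical stand-in for the fields the sync path consults.
type Pod struct {
	Name        string
	HostNetwork bool
	HasSandbox  bool
}

// syncPod mimics the gate seen above: sandbox creation for pod-network
// pods is refused while the runtime's network is not ready.
func syncPod(p Pod, networkErr error) error {
	if !p.HasSandbox {
		fmt.Printf("No sandbox for pod can be found. Need to start a new one pod=%q\n", p.Name)
		if networkErr != nil && !p.HostNetwork {
			return fmt.Errorf("network is not ready: %w", networkErr)
		}
		// ...create sandbox, pull images, start containers...
	}
	return nil
}

func main() {
	netErr := errors.New("container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady")
	if err := syncPod(Pod{Name: "openshift-multus/network-metrics-daemon-z9wmf"}, netErr); err != nil {
		fmt.Println("Error syncing pod, skipping:", err) // retried on a later sync
	}
}

The skip is deliberate backoff rather than failure: each affected pod stays Pending and is re-queued until the CNI config appears.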
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.542937 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.543222 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.543232 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.543249 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:14 crc kubenswrapper[4998]: I1208 18:53:14.543260 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:14Z","lastTransitionTime":"2025-12-08T18:53:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.015122 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.015192 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.015207 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.015227 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.015239 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.027927 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lmsm8" event={"ID":"2cc4703c-51fa-4a35-ab04-0a6028035fb2","Type":"ContainerStarted","Data":"e0f05d9fc1aea7f6ceef5b514ccd12fd97796bbe76be57eaa48021bbb3a56daa"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.028890 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerStarted","Data":"57a26de1003f27f86465fa026b6171219cbed6007054284e7e0529b1f0a4eaa1"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.030226 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"aec38036875a407f24a8892d97b36b7cf5455831d61c8351197e7e0352abc72c"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.030258 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.040583 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab88c832-775d-46c6-9167-aa51d0574b17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z9wmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.070317 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"806263eb-1527-40bc-9f4d-dbaa9ccae40c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e33
3e8d67b7dfc26c1e726d56267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.091921 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.108064 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-72nfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gdw2c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-72nfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.117092 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.117142 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.117153 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.117169 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.117180 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.119658 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf1d2f-fd23-4a18-96bc-cfec142c5909\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW1208 18:52:49.039997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:49.040168 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:49.041022 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2686458259/tls.crt::/tmp/serving-cert-2686458259/tls.key\\\\\\\"\\\\nI1208 18:52:49.681709 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:49.685354 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:49.685372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:49.685429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:49.685439 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:49.690533 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 18:52:49.690550 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 18:52:49.690565 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690572 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:49.690580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:49.690583 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:49.690586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 18:52:49.693927 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.129235 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.138556 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f05d9fc1aea7f6ceef5b514ccd12fd97796bbe76be57eaa48021bbb3a56daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.159121 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"085d31f3-c7fb-4aca-903c-9db17e8d0047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-binco
py\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9kdnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.168845 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8867028-389a-494e-b230-ed29201b63ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ql7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.188709 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\
\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\
",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: 
connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.204071 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"99ffc072-8f76-4a27-bb7b-b1ff802d45cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"c
pu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.219635 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.219913 4998 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.220001 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.220127 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.220203 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.220140 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.232959 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.244694 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.254803 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.263274 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.287146 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjfn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a43997-c346-42c7-a485-b2b55c22c9c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26tn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for 
pod \"openshift-dns\"/\"node-resolver-wjfn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.297510 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4efefa79-ffc6-4211-84df-8feef5c66eba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://355883c3b875ba7df515d5d07538ec1a017d38a87bf6cbef9f6a939b1b0f860c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://83458962d98a0db15939e11f6ac7a1f814ac5cf95aec1adc4993753182d9e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd499714a3956c76fc95cf29eb557f332ab8a3d8927878cd076ed6fe0b97da75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.306091 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0da1f94e-cb48-4dc3-ac19-c4b1cb4dbc24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://486121fa7a66609e79a4ec8139d2aadefdc5b8d1ed0c710a77116e00e8a28078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.315275 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f05d9fc1aea7f6ceef5b514ccd12fd97796bbe76be57eaa48021bbb3a56daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.322536 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.322588 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc 
kubenswrapper[4998]: I1208 18:53:15.322599 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.322615 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.322626 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.331464 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"085d31f3-c7fb-4aca-903c-9db17e8d0047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a26de1003f27f86465fa026b6171219cbed6007054284e7e0529b1f0a4eaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",
\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9kdnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.339457 4998 status_manager.go:919] "Failed to update 
status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8867028-389a-494e-b230-ed29201b63ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ql7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.354797 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\
\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}
],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.365118 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:15 crc kubenswrapper[4998]: E1208 18:53:15.365260 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.396381 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99ffc072-8f76-4a27-bb7b-b1ff802d45cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.424599 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.424648 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.424660 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.424674 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.424687 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.475382 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.493237 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.508157 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.518660 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.530192 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.530243 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.530254 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.530271 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.530282 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.535629 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.545658 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-wjfn5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f0a43997-c346-42c7-a485-b2b55c22c9c6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-26tn9\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-wjfn5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.555823 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4efefa79-ffc6-4211-84df-8feef5c66eba\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://355883c3b875ba7df515d5d07538ec1a017d38a87bf6cbef9f6a939b1b0f860c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://83458962d98a0db15939e11f6ac7a1f814ac5cf95aec1adc4993753182d9e348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://dd499714a3956c76fc95cf29eb557f332ab8a3d8927878cd076ed6fe0b97da75\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf
1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2a36fc90db7f470ed87ddfc26c02827bfaca7498daabcfd105cf9e98e314b2d2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.568487 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0da1f94e-cb48-4dc3-ac19-c4b1cb4dbc24\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://486121fa7a66609e79a4ec8139d2aadefdc5b8d1ed0c710a77116e00e8a28078\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4e12c445418b8e4c457987d93bfd66f05fffe4b76c9b39efeaa95bba87aa2f6f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.579385 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ab88c832-775d-46c6-9167-aa51d0574b17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-brvct\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-z9wmf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.601878 4998 status_manager.go:919] "Failed 
to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"806263eb-1527-40bc-9f4d-dbaa9ccae40c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\
"cri-o://c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\
\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\
\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.612833 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://aec38036875a407f24a8892d97b36b7cf5455831d61c8351197e7e0352abc72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"m
emory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.624815 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-72nfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gdw2c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-72nfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.632239 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.632269 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.632280 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.632295 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.632307 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.645795 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf1d2f-fd23-4a18-96bc-cfec142c5909\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW1208 18:52:49.039997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:49.040168 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:49.041022 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2686458259/tls.crt::/tmp/serving-cert-2686458259/tls.key\\\\\\\"\\\\nI1208 18:52:49.681709 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:49.685354 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:49.685372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:49.685429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:49.685439 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:49.690533 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 18:52:49.690550 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 18:52:49.690565 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690572 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:49.690580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:49.690583 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:49.690586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 18:52:49.693927 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.662001 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.733894 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.733939 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.733947 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.733966 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.733975 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.836240 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.836275 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.836284 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.836297 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:15 crc kubenswrapper[4998]: I1208 18:53:15.836306 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:15Z","lastTransitionTime":"2025-12-08T18:53:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.026223 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.026259 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.026271 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.026287 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.026298 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.052846 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"9f4d249237843ab9d786be22128cac5b12d406f734de947a1a3d3db1b2dcb15c"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.052898 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"01412ceac215b156e67df7230ca38f71c2b2ef4f6c25c91f8398e38784124748"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.056086 4998 generic.go:358] "Generic (PLEG): container finished" podID="085d31f3-c7fb-4aca-903c-9db17e8d0047" containerID="57a26de1003f27f86465fa026b6171219cbed6007054284e7e0529b1f0a4eaa1" exitCode=0 Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.056145 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerDied","Data":"57a26de1003f27f86465fa026b6171219cbed6007054284e7e0529b1f0a4eaa1"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.074279 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"806263eb-1527-40bc-9f4d-dbaa9ccae40c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://0c942e31b7f72b227a9141eaf2ee6242a4dbc108456141bead3be47ffa2f27fe\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://1476b6920197987b99a00525b0a441534c7e99761ef0ad391b5f435c1231b81f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://c71949fd8738b42d1a0a31ed86e69ad0b49bd0162b001f6989807ae7a9857cd2\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://068f90405bb4193555bd06e4131625d8f257d7eafe07c9a08f1783d056d08533\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:43Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ef2041c63fef2c072d9a88e8018244220632031004c32ddf2fa8cec5189e80fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:42Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e33
3e8d67b7dfc26c1e726d56267\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5d988588fc0a226234c348254412f1da7f90e333e8d67b7dfc26c1e726d56267\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://49108ab81ac383e0049e4560d63ae02e9b3570959c06dcb675775a6eadbe4eab\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53fb057db4503c0ed7175d8d019d90beb140d7e04e5d31399b725427b0ff472c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:40Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}
],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.087796 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0c186590-6bde-4b05-ac4d-9e6f0e656d17\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://aec38036875a407f24a8892d97b36b7cf5455831d61c8351197e7e0352abc72c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-mbwzm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-gwq5q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.117321 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-72nfz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gdw2c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-72nfz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.140770 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.140914 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.140946 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.140994 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.141031 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.244632 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3bbf1d2f-fd23-4a18-96bc-cfec142c5909\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"s
upplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-12-08T18:52:49Z\\\",\\\"message\\\":\\\"172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"InformerResourceVersion\\\\\\\" enabled=false\\\\nW1208 18:52:49.039997 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI1208 18:52:49.040168 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI1208 18:52:49.041022 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2686458259/tls.crt::/tmp/serving-cert-2686458259/tls.key\\\\\\\"\\\\nI1208 18:52:49.681709 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI1208 18:52:49.685354 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI1208 18:52:49.685372 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI1208 18:52:49.685429 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI1208 18:52:49.685439 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI1208 18:52:49.690533 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nI1208 18:52:49.690550 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW1208 18:52:49.690565 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690572 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW1208 18:52:49.690577 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW1208 18:52:49.690580 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW1208 18:52:49.690583 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW1208 18:52:49.690586 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF1208 18:52:49.693927 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-12-08T18:52:48Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:41Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-12-08T18:51:38Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.245545 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.245590 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.245606 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.245625 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.245715 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.262157 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:16Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://9f4d249237843ab9d786be22128cac5b12d406f734de947a1a3d3db1b2dcb15c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://01412ceac215b156e67df7230ca38f71c2b2ef4f6c25c91f8398e38784124748\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:15Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0,1000500000],\\\"uid\\\":1000500000}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.276634 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lmsm8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2cc4703c-51fa-4a35-ab04-0a6028035fb2\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"},\\\"containerID\\\":\\\"cri-o://e0f05d9fc1aea7f6ceef5b514ccd12fd97796bbe76be57eaa48021bbb3a56daa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"10Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":1001}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-n2qzp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lmsm8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.296433 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"085d31f3-c7fb-4aca-903c-9db17e8d0047\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://57a26de1003f27f86465fa026b6171219cbed6007054284e7e0529b1f0a4eaa1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"resources\\\":{},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"
lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093b
db486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2w77\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-9kdnj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.311577 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b8867028-389a-494e-b230-ed29201b63ca\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct 
envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-pj679\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-ql7xr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.329418 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc7150c6-b180-4712-a5ed-6b25328d0118\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9lvgz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:53:00Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-h7zr9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.340482 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"99ffc072-8f76-4a27-bb7b-b1ff802d45cc\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:52:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-12-08T18:51:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://ee63771cb8a4b1de599a12272e08e0e0b6dc846680731e5ed4e980867824fa30\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:39Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:38Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://8ee7697184c027f44e3d23a60b3701c480d9f83bd5e19541f33ccbbe6b3db564\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha25
6:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7b3ffaab9d25ba7480f7909ed3e81fd2ffdd94b99de2b07efc78b672bd8381c3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-12-08T18:51:40Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-12-08T18:51:37Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.348349 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.348390 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.348402 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.348417 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.348427 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.351538 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"services have not yet been read at least once, cannot construct envvars\\\",\\\"reason\\\":\\\"CreateContainerConfigError\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.367867 4998 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-12-08T18:53:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.368094 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.368180 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.368601 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.369011 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.369054 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.369086 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.369147 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.369294 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.451107 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.451148 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.451157 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.451175 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.451185 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.478251 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=16.478194333 podStartE2EDuration="16.478194333s" podCreationTimestamp="2025-12-08 18:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:16.474027352 +0000 UTC m=+100.122070042" watchObservedRunningTime="2025-12-08 18:53:16.478194333 +0000 UTC m=+100.126237023" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.489349 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=16.489335059 podStartE2EDuration="16.489335059s" podCreationTimestamp="2025-12-08 18:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:16.487648614 +0000 UTC m=+100.135691314" watchObservedRunningTime="2025-12-08 18:53:16.489335059 +0000 UTC m=+100.137377749" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.570927 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.570963 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.570971 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.570985 4998 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.570996 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.587835 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=16.587819706 podStartE2EDuration="16.587819706s" podCreationTimestamp="2025-12-08 18:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:16.586414339 +0000 UTC m=+100.234457049" watchObservedRunningTime="2025-12-08 18:53:16.587819706 +0000 UTC m=+100.235862396" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.617005 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podStartSLOduration=72.616990251 podStartE2EDuration="1m12.616990251s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:16.601244892 +0000 UTC m=+100.249287582" watchObservedRunningTime="2025-12-08 18:53:16.616990251 +0000 UTC m=+100.265032931" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.665289 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lmsm8" podStartSLOduration=72.665270004 podStartE2EDuration="1m12.665270004s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:16.665018747 +0000 UTC m=+100.313061437" watchObservedRunningTime="2025-12-08 18:53:16.665270004 +0000 UTC m=+100.313312684" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.673030 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.673061 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.673099 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.673112 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.673122 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.742934 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=16.742891856 podStartE2EDuration="16.742891856s" podCreationTimestamp="2025-12-08 18:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:16.740848262 +0000 UTC m=+100.388890972" watchObservedRunningTime="2025-12-08 18:53:16.742891856 +0000 UTC m=+100.390934546" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.744795 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.744886 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.744909 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.744933 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.744952 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:32.74493102 +0000 UTC m=+116.392973710 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745031 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745046 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745061 4998 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.745094 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745110 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:32.745096814 +0000 UTC m=+116.393139504 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745191 4998 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745212 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:32.745206647 +0000 UTC m=+116.393249337 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745235 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745236 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745253 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745256 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:32.745249958 +0000 UTC m=+116.393292648 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745265 4998 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.745344 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:32.74532818 +0000 UTC m=+116.393370870 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.780637 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.780732 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.780745 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.780758 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.780769 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.846192 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf"
Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.846315 4998 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 18:53:16 crc kubenswrapper[4998]: E1208 18:53:16.846380 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs podName:ab88c832-775d-46c6-9167-aa51d0574b17 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:32.846362804 +0000 UTC m=+116.494405494 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs") pod "network-metrics-daemon-z9wmf" (UID: "ab88c832-775d-46c6-9167-aa51d0574b17") : object "openshift-multus"/"metrics-daemon-secret" not registered
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.882427 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.882474 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.882483 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.882498 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.882506 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.984988 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.985032 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.985043 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.985056 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:16 crc kubenswrapper[4998]: I1208 18:53:16.985065 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:16Z","lastTransitionTime":"2025-12-08T18:53:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.060615 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"b1da223b45febc580854c9fb1caf0cb1ab08ae12a1aacf6426874f7c2810cd0e"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.066032 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerStarted","Data":"a428cbe884e9a8c1a4e967a869c014d88685abea8dd3081ac162159781b721f2"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.098605 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.098660 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.098670 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.098706 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.098717 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.222782 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.222825 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.222837 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.222856 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.222868 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.327810 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.327847 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.327858 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.327871 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.327881 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.366810 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf"
Dec 08 18:53:17 crc kubenswrapper[4998]: E1208 18:53:17.366914 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.431999 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.432046 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.432058 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.432075 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.432087 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.539957 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.540208 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.540544 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.540575 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.540586 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.643521 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.643565 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.643575 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.643591 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.643602 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.746253 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.746317 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.746334 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.746355 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.746368 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.849032 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.849084 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.849096 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.849112 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.849124 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.950738 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.950784 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.950794 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.950811 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:17 crc kubenswrapper[4998]: I1208 18:53:17.950820 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:17Z","lastTransitionTime":"2025-12-08T18:53:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.054803 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.055273 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.055283 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.055305 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.055329 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.093023 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-wjfn5" event={"ID":"f0a43997-c346-42c7-a485-b2b55c22c9c6","Type":"ContainerStarted","Data":"21389d1ad6cf56f6951a7c2ab22599d58ae33570b1808c22458171e7c7c80228"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.095030 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4" exitCode=0
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.095086 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.097621 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-72nfz" event={"ID":"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa","Type":"ContainerStarted","Data":"cf8eb80c08729777b822ab3758bd12c4310a87e1949d64d6bb3f074c45ec7fbd"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.123632 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" event={"ID":"b8867028-389a-494e-b230-ed29201b63ca","Type":"ContainerStarted","Data":"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.123689 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" event={"ID":"b8867028-389a-494e-b230-ed29201b63ca","Type":"ContainerStarted","Data":"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.130354 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-wjfn5" podStartSLOduration=74.130339657 podStartE2EDuration="1m14.130339657s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:18.129162617 +0000 UTC m=+101.777205317" watchObservedRunningTime="2025-12-08 18:53:18.130339657 +0000 UTC m=+101.778382347"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.167498 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.167542 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.167554 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.167573 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.167585 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.169221 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-72nfz" podStartSLOduration=74.16920762 podStartE2EDuration="1m14.16920762s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:18.150193296 +0000 UTC m=+101.798235996" watchObservedRunningTime="2025-12-08 18:53:18.16920762 +0000 UTC m=+101.817250310"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.202334 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" podStartSLOduration=73.20230666 podStartE2EDuration="1m13.20230666s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:18.169379494 +0000 UTC m=+101.817422194" watchObservedRunningTime="2025-12-08 18:53:18.20230666 +0000 UTC m=+101.850349360"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.269735 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.269777 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.269791 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.269808 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.269819 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.365347 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 18:53:18 crc kubenswrapper[4998]: E1208 18:53:18.365472 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.365599 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.365618 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 18:53:18 crc kubenswrapper[4998]: E1208 18:53:18.365913 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 18:53:18 crc kubenswrapper[4998]: E1208 18:53:18.365994 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.372414 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.372456 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.372466 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.372480 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.372491 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.474434 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.474992 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.475010 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.475030 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.475040 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.587102 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.587155 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.587169 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.587185 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.587194 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.689806 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.689860 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.689871 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.689886 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.689898 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.792899 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.792941 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.792952 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.792971 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.792986 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.894768 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.894828 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.894841 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.894858 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.894873 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.997324 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.997392 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.997406 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.997433 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:18 crc kubenswrapper[4998]: I1208 18:53:18.997464 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:18Z","lastTransitionTime":"2025-12-08T18:53:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.099391 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.099446 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.099460 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.099476 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.099488 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.129426 4998 generic.go:358] "Generic (PLEG): container finished" podID="085d31f3-c7fb-4aca-903c-9db17e8d0047" containerID="a428cbe884e9a8c1a4e967a869c014d88685abea8dd3081ac162159781b721f2" exitCode=0
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.129514 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerDied","Data":"a428cbe884e9a8c1a4e967a869c014d88685abea8dd3081ac162159781b721f2"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.138275 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.138322 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.138333 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.138343 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.138353 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.138364 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.202467 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.202513 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.202525 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.202543 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.202555 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.304418 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.304461 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.304472 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.304487 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.304497 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.373246 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.373805 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 18:53:19 crc kubenswrapper[4998]: E1208 18:53:19.374046 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 18:53:19 crc kubenswrapper[4998]: E1208 18:53:19.374112 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.406908 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.406957 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.406966 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.406982 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.406994 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.604403 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.604456 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.604470 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.604490 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.604503 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.711563 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.711619 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.711628 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.711643 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.711652 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.814335 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.814596 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.814606 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.814622 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.814631 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.917345 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.917386 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.917394 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.917411 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:19 crc kubenswrapper[4998]: I1208 18:53:19.917420 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:19Z","lastTransitionTime":"2025-12-08T18:53:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.019565 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.019607 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.019616 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.019631 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.019640 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:20Z","lastTransitionTime":"2025-12-08T18:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.038121 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.038168 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.038180 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.038205 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.038218 4998 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-12-08T18:53:20Z","lastTransitionTime":"2025-12-08T18:53:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.147465 4998 generic.go:358] "Generic (PLEG): container finished" podID="085d31f3-c7fb-4aca-903c-9db17e8d0047" containerID="0815b24e36dae675c2a9b8ebae9d4e06b40e291fe0992897dde0defe73d150cf" exitCode=0
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.147577 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerDied","Data":"0815b24e36dae675c2a9b8ebae9d4e06b40e291fe0992897dde0defe73d150cf"}
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.178668 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"]
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.276266 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.278374 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\""
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.278769 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\""
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.279064 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\""
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.280928 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\""
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.327270 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.327428 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.327632 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.327840 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.327887 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.365217 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 18:53:20 crc kubenswrapper[4998]: E1208 18:53:20.365388 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.365799 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 18:53:20 crc kubenswrapper[4998]: E1208 18:53:20.365858 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.429848 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.430057 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.430096 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.430144 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.430173 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.430590 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.430753 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.435444 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.448136 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.449854 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6b141ca5-d307-46b2-a29c-0e4597d8f6cf-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-vpzj6\" (UID: \"6b141ca5-d307-46b2-a29c-0e4597d8f6cf\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.597271 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6"
Dec 08 18:53:20 crc kubenswrapper[4998]: W1208 18:53:20.615429 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b141ca5_d307_46b2_a29c_0e4597d8f6cf.slice/crio-3f19613d0303ab31919484582f1ff95b6a50b15c782302e3eacdb11070c0b316 WatchSource:0}: Error finding container 3f19613d0303ab31919484582f1ff95b6a50b15c782302e3eacdb11070c0b316: Status 404 returned error can't find the container with id 3f19613d0303ab31919484582f1ff95b6a50b15c782302e3eacdb11070c0b316
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.856069 4998 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 18:53:20 crc kubenswrapper[4998]: I1208 18:53:20.870264 4998 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 18:53:21 crc kubenswrapper[4998]: I1208 18:53:21.152555 4998 generic.go:358] "Generic (PLEG): container finished" podID="085d31f3-c7fb-4aca-903c-9db17e8d0047" containerID="b4be858f10825fc10a46c5efcaf0bdf86c224de63038feceea4702422d9f0596" exitCode=0
Dec 08 18:53:21 crc kubenswrapper[4998]: I1208 18:53:21.152610 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerDied","Data":"b4be858f10825fc10a46c5efcaf0bdf86c224de63038feceea4702422d9f0596"}
Dec 08 18:53:21 crc kubenswrapper[4998]: I1208 18:53:21.153512 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6" event={"ID":"6b141ca5-d307-46b2-a29c-0e4597d8f6cf","Type":"ContainerStarted","Data":"fd263cfc1e659c72492a36dc15ce95cae4c383dc022eb6beba80252845100ff9"}
Dec 08 18:53:21 crc kubenswrapper[4998]: I1208 18:53:21.153785 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6" event={"ID":"6b141ca5-d307-46b2-a29c-0e4597d8f6cf","Type":"ContainerStarted","Data":"3f19613d0303ab31919484582f1ff95b6a50b15c782302e3eacdb11070c0b316"}
Dec 08 18:53:21 crc kubenswrapper[4998]: I1208 18:53:21.367112 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 18:53:21 crc kubenswrapper[4998]: E1208 18:53:21.367240 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 18:53:21 crc kubenswrapper[4998]: I1208 18:53:21.367325 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf"
Dec 08 18:53:21 crc kubenswrapper[4998]: E1208 18:53:21.367393 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17"
Dec 08 18:53:22 crc kubenswrapper[4998]: I1208 18:53:22.164318 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerStarted","Data":"bf4a8848c9ac741f66987ad838c4e364bb3825c6b961d093842f3122765ae228"}
Dec 08 18:53:22 crc kubenswrapper[4998]: I1208 18:53:22.167444 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10"}
Dec 08 18:53:22 crc kubenswrapper[4998]: I1208 18:53:22.194672 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-vpzj6" podStartSLOduration=78.194652019 podStartE2EDuration="1m18.194652019s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:21.202971202 +0000 UTC m=+104.851013892" watchObservedRunningTime="2025-12-08 18:53:22.194652019 +0000 UTC m=+105.842694709"
Dec 08 18:53:22 crc kubenswrapper[4998]: I1208 18:53:22.366139 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 18:53:22 crc kubenswrapper[4998]: I1208 18:53:22.366139 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 18:53:22 crc kubenswrapper[4998]: E1208 18:53:22.366463 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440"
Dec 08 18:53:22 crc kubenswrapper[4998]: E1208 18:53:22.366633 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 18:53:23 crc kubenswrapper[4998]: I1208 18:53:23.369344 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf"
Dec 08 18:53:23 crc kubenswrapper[4998]: E1208 18:53:23.369540 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17"
Dec 08 18:53:23 crc kubenswrapper[4998]: I1208 18:53:23.369731 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl"
Dec 08 18:53:23 crc kubenswrapper[4998]: E1208 18:53:23.369872 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239"
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.290766 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerStarted","Data":"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4"}
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.291388 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9"
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.291411 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9"
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.291421 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9"
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.334654 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9"
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.338316 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9"
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.365285 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6"
Dec 08 18:53:24 crc kubenswrapper[4998]: E1208 18:53:24.365422 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141"
Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.365465 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5"
Dec 08 18:53:24 crc kubenswrapper[4998]: E1208 18:53:24.365588 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:24 crc kubenswrapper[4998]: I1208 18:53:24.384678 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podStartSLOduration=80.384657093 podStartE2EDuration="1m20.384657093s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:24.345706179 +0000 UTC m=+107.993748879" watchObservedRunningTime="2025-12-08 18:53:24.384657093 +0000 UTC m=+108.032699783" Dec 08 18:53:25 crc kubenswrapper[4998]: I1208 18:53:25.365654 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:25 crc kubenswrapper[4998]: E1208 18:53:25.365822 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:25 crc kubenswrapper[4998]: I1208 18:53:25.365889 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:25 crc kubenswrapper[4998]: E1208 18:53:25.366053 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:26 crc kubenswrapper[4998]: I1208 18:53:26.302587 4998 generic.go:358] "Generic (PLEG): container finished" podID="085d31f3-c7fb-4aca-903c-9db17e8d0047" containerID="bf4a8848c9ac741f66987ad838c4e364bb3825c6b961d093842f3122765ae228" exitCode=0 Dec 08 18:53:26 crc kubenswrapper[4998]: I1208 18:53:26.302668 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerDied","Data":"bf4a8848c9ac741f66987ad838c4e364bb3825c6b961d093842f3122765ae228"} Dec 08 18:53:26 crc kubenswrapper[4998]: I1208 18:53:26.365610 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:26 crc kubenswrapper[4998]: E1208 18:53:26.365845 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:26 crc kubenswrapper[4998]: I1208 18:53:26.366539 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:26 crc kubenswrapper[4998]: E1208 18:53:26.366637 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:27 crc kubenswrapper[4998]: I1208 18:53:27.309859 4998 generic.go:358] "Generic (PLEG): container finished" podID="085d31f3-c7fb-4aca-903c-9db17e8d0047" containerID="a0c9ef10a4773d66cd9570f43789339646eba9f2b33068b011381d665dcafb19" exitCode=0 Dec 08 18:53:27 crc kubenswrapper[4998]: I1208 18:53:27.309968 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerDied","Data":"a0c9ef10a4773d66cd9570f43789339646eba9f2b33068b011381d665dcafb19"} Dec 08 18:53:27 crc kubenswrapper[4998]: I1208 18:53:27.365293 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:27 crc kubenswrapper[4998]: I1208 18:53:27.365327 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:27 crc kubenswrapper[4998]: E1208 18:53:27.365490 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:27 crc kubenswrapper[4998]: E1208 18:53:27.365926 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:28 crc kubenswrapper[4998]: I1208 18:53:28.315835 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"90e3b39510d6cfa950bfd81960d42ab4b41b37f5b51304aa447ff134159a0755"} Dec 08 18:53:28 crc kubenswrapper[4998]: I1208 18:53:28.320093 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" event={"ID":"085d31f3-c7fb-4aca-903c-9db17e8d0047","Type":"ContainerStarted","Data":"693d701e42362013e3125dd54e79d6c8a6317d2b35c1707a079e5d4c4594ba68"} Dec 08 18:53:28 crc kubenswrapper[4998]: I1208 18:53:28.365083 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:28 crc kubenswrapper[4998]: E1208 18:53:28.365244 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:28 crc kubenswrapper[4998]: I1208 18:53:28.365455 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:28 crc kubenswrapper[4998]: E1208 18:53:28.365523 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:28 crc kubenswrapper[4998]: I1208 18:53:28.439434 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-9kdnj" podStartSLOduration=84.43941097 podStartE2EDuration="1m24.43941097s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:28.376321274 +0000 UTC m=+112.024363984" watchObservedRunningTime="2025-12-08 18:53:28.43941097 +0000 UTC m=+112.087453660" Dec 08 18:53:28 crc kubenswrapper[4998]: I1208 18:53:28.440751 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z9wmf"] Dec 08 18:53:28 crc kubenswrapper[4998]: I1208 18:53:28.440837 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:28 crc kubenswrapper[4998]: E1208 18:53:28.440921 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:29 crc kubenswrapper[4998]: I1208 18:53:29.365782 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:29 crc kubenswrapper[4998]: E1208 18:53:29.366200 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:29 crc kubenswrapper[4998]: I1208 18:53:29.366362 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a" Dec 08 18:53:29 crc kubenswrapper[4998]: E1208 18:53:29.366573 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Dec 08 18:53:30 crc kubenswrapper[4998]: I1208 18:53:30.365490 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:30 crc kubenswrapper[4998]: I1208 18:53:30.365516 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:30 crc kubenswrapper[4998]: I1208 18:53:30.365530 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:30 crc kubenswrapper[4998]: E1208 18:53:30.365633 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:30 crc kubenswrapper[4998]: E1208 18:53:30.365755 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:30 crc kubenswrapper[4998]: E1208 18:53:30.365882 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:31 crc kubenswrapper[4998]: I1208 18:53:31.365467 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:31 crc kubenswrapper[4998]: E1208 18:53:31.365759 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.365204 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.365269 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.365205 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.365416 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.365658 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.366007 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-z9wmf" podUID="ab88c832-775d-46c6-9167-aa51d0574b17" Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.782919 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.783036 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.783062 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.783079 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.783121 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783190 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.783159686 +0000 UTC m=+148.431202376 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783236 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783251 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783267 4998 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783291 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783310 4998 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783319 4998 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783324 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.78330554 +0000 UTC m=+148.431348230 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783350 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.783341051 +0000 UTC m=+148.431383741 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783233 4998 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783377 4998 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783379 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.783372352 +0000 UTC m=+148.431415032 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.783434 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.783427313 +0000 UTC m=+148.431470003 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: I1208 18:53:32.885055 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.885325 4998 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:32 crc kubenswrapper[4998]: E1208 18:53:32.885465 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs podName:ab88c832-775d-46c6-9167-aa51d0574b17 nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.885436083 +0000 UTC m=+148.533478953 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs") pod "network-metrics-daemon-z9wmf" (UID: "ab88c832-775d-46c6-9167-aa51d0574b17") : object "openshift-multus"/"metrics-daemon-secret" not registered Dec 08 18:53:33 crc kubenswrapper[4998]: I1208 18:53:33.365147 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:33 crc kubenswrapper[4998]: E1208 18:53:33.365408 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Dec 08 18:53:33 crc kubenswrapper[4998]: I1208 18:53:33.882135 4998 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Dec 08 18:53:33 crc kubenswrapper[4998]: I1208 18:53:33.882314 4998 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Dec 08 18:53:33 crc kubenswrapper[4998]: I1208 18:53:33.928153 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.727934 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-ln56w"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.732631 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.733075 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.734876 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.738509 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-mv699"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.741520 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.741520 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.742071 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.742246 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.742280 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5ht5v"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.742485 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.742478 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.742729 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.742756 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.743006 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.743336 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.743955 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.744160 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.744235 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.748947 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.749826 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ncn97"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.753250 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-chjpp"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.754303 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.754373 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.754324 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.754324 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.754793 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.754966 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.755110 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.755278 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.758085 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.758431 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.758599 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.758907 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759064 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759226 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759403 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759571 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759753 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759840 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759915 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759986 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.759991 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.760201 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.760319 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.761238 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.766145 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-8f7f5"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.767745 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.768341 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.768482 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.768753 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.769145 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.769336 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.769525 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.769658 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.770372 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.770623 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.773635 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.774386 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.774520 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.774982 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.775395 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.775394 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.782017 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.782362 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.784646 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.785747 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.785966 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.786081 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.786111 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.786310 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.786542 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.786770 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.787194 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.787394 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.787516 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.787733 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.787897 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.788034 4998 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.788120 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.788181 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.788306 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.788603 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.790521 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.790821 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.788063 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.791392 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.791650 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.791861 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.792015 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.792316 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.795161 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.795207 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.796949 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cz726"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.797279 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.820165 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.820901 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.821581 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823276 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823333 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fhfb\" (UniqueName: \"kubernetes.io/projected/2febfd9e-52ad-411f-96d1-50b478dbeaa1-kube-api-access-4fhfb\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823357 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-image-import-ca\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823377 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr5br\" (UniqueName: \"kubernetes.io/projected/7093463d-f312-4923-b2b6-bdaeac386011-kube-api-access-hr5br\") pod \"cluster-samples-operator-6b564684c8-hg9l2\" (UID: \"7093463d-f312-4923-b2b6-bdaeac386011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823446 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cba21c5-f3df-4d04-83db-2571902f2bff-audit-dir\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823472 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2febfd9e-52ad-411f-96d1-50b478dbeaa1-auth-proxy-config\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823595 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-etcd-client\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823639 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-etcd-client\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823734 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-serving-cert\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823798 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-audit-policies\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823830 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-audit\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823865 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-encryption-config\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823893 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-encryption-config\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823926 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-config\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.823965 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2febfd9e-52ad-411f-96d1-50b478dbeaa1-machine-approver-tls\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc 
kubenswrapper[4998]: I1208 18:53:34.823990 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2febfd9e-52ad-411f-96d1-50b478dbeaa1-config\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824027 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcb5g\" (UniqueName: \"kubernetes.io/projected/baa20693-033c-48d7-b6d1-dbe6a846988f-kube-api-access-hcb5g\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824062 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-trusted-ca-bundle\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824110 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/baa20693-033c-48d7-b6d1-dbe6a846988f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824145 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-etcd-serving-ca\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824226 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npz68\" (UniqueName: \"kubernetes.io/projected/7cba21c5-f3df-4d04-83db-2571902f2bff-kube-api-access-npz68\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824263 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qqxm8\" (UniqueName: \"kubernetes.io/projected/0f532410-7407-41fe-b95e-d1a785d4ebfe-kube-api-access-qqxm8\") pod \"downloads-747b44746d-ln56w\" (UID: \"0f532410-7407-41fe-b95e-d1a785d4ebfe\") " pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824297 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824327 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" 
(UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-serving-cert\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824349 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/baa20693-033c-48d7-b6d1-dbe6a846988f-audit-dir\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.824374 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7093463d-f312-4923-b2b6-bdaeac386011-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-hg9l2\" (UID: \"7093463d-f312-4923-b2b6-bdaeac386011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.828769 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.828955 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.828982 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.829299 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.829582 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.829610 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.829810 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.829951 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.830143 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.830218 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.830320 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.830474 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.830508 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.830772 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.830887 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.835330 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.835448 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.835951 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-6trs2"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.836296 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.836348 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.838239 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.839400 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.839678 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.841141 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.844040 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-m9nr7"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.844425 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.845849 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.845931 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.846119 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.846252 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.846421 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.846564 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.846808 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.847280 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.848034 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.849575 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.850009 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kj9vm"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.851186 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.852231 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.855603 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.856134 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.857730 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.858317 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.866439 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.874502 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.874769 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8w69c"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.875031 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.880120 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.882379 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.883953 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.889275 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.889458 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.893561 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.893714 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.893727 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.896763 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.896880 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.899555 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.899710 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.902538 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.905143 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.909881 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.910064 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.915822 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.916026 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.915864 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.921935 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-28nnk"] Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.922160 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925471 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-etcd-serving-ca\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925530 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5196d8a-8e2f-4e51-8c30-0553f127a401-config\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925570 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b7afbd75-761b-4b21-832f-8aeba8f7802f-available-featuregates\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925598 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925676 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fa53a8-054b-49f0-8892-3a31f83195cb-config\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925737 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925805 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77fa53a8-054b-49f0-8892-3a31f83195cb-images\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925883 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4326b45d-f585-474b-8899-ed8c604ca68e-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: 
\"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925916 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925947 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77fa53a8-054b-49f0-8892-3a31f83195cb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.925984 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-serving-cert\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926051 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-npz68\" (UniqueName: \"kubernetes.io/projected/7cba21c5-f3df-4d04-83db-2571902f2bff-kube-api-access-npz68\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926082 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qqxm8\" (UniqueName: \"kubernetes.io/projected/0f532410-7407-41fe-b95e-d1a785d4ebfe-kube-api-access-qqxm8\") pod \"downloads-747b44746d-ln56w\" (UID: \"0f532410-7407-41fe-b95e-d1a785d4ebfe\") " pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926129 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926160 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926188 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-serving-cert\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926210 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/baa20693-033c-48d7-b6d1-dbe6a846988f-audit-dir\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926229 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7093463d-f312-4923-b2b6-bdaeac386011-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-hg9l2\" (UID: \"7093463d-f312-4923-b2b6-bdaeac386011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926257 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-srmsp\" (UniqueName: \"kubernetes.io/projected/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-kube-api-access-srmsp\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926307 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926323 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/baa20693-033c-48d7-b6d1-dbe6a846988f-audit-dir\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926337 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xq25\" (UniqueName: \"kubernetes.io/projected/4326b45d-f585-474b-8899-ed8c604ca68e-kube-api-access-6xq25\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926383 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-trusted-ca-bundle\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926413 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926446 4998 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn42v\" (UniqueName: \"kubernetes.io/projected/77fa53a8-054b-49f0-8892-3a31f83195cb-kube-api-access-cn42v\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926479 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4fhfb\" (UniqueName: \"kubernetes.io/projected/2febfd9e-52ad-411f-96d1-50b478dbeaa1-kube-api-access-4fhfb\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926536 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-etcd-serving-ca\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.926654 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-image-import-ca\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927131 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927174 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5196d8a-8e2f-4e51-8c30-0553f127a401-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927248 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7afbd75-761b-4b21-832f-8aeba8f7802f-serving-cert\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927431 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-ca\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927506 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-config\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927565 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927627 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hr5br\" (UniqueName: \"kubernetes.io/projected/7093463d-f312-4923-b2b6-bdaeac386011-kube-api-access-hr5br\") pod \"cluster-samples-operator-6b564684c8-hg9l2\" (UID: \"7093463d-f312-4923-b2b6-bdaeac386011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927657 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c82mt\" (UniqueName: \"kubernetes.io/projected/e5196d8a-8e2f-4e51-8c30-0553f127a401-kube-api-access-c82mt\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927712 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-tmp-dir\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927749 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3223550-df04-4846-a030-56e1f6763d0b-console-oauth-config\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927781 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6vxg\" (UniqueName: \"kubernetes.io/projected/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-kube-api-access-g6vxg\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.927814 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-metrics-tls\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928069 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gqjw\" 
(UniqueName: \"kubernetes.io/projected/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-kube-api-access-8gqjw\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928139 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928196 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928268 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-image-import-ca\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928293 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4326b45d-f585-474b-8899-ed8c604ca68e-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928372 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-service-ca\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928485 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cba21c5-f3df-4d04-83db-2571902f2bff-audit-dir\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928548 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6pff\" (UniqueName: \"kubernetes.io/projected/a3223550-df04-4846-a030-56e1f6763d0b-kube-api-access-f6pff\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928613 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " 
pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928614 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2febfd9e-52ad-411f-96d1-50b478dbeaa1-auth-proxy-config\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928681 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4326b45d-f585-474b-8899-ed8c604ca68e-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928768 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928795 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kln6j\" (UniqueName: \"kubernetes.io/projected/b7afbd75-761b-4b21-832f-8aeba8f7802f-kube-api-access-kln6j\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928834 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2gtq\" (UniqueName: \"kubernetes.io/projected/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-kube-api-access-f2gtq\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928864 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928886 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-client-ca\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928915 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-etcd-client\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928946 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-etcd-client\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928971 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjhh6\" (UniqueName: \"kubernetes.io/projected/d421eb48-b80b-4051-be83-d48b866bc67b-kube-api-access-vjhh6\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.928998 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-dir\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929023 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929049 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7ea19de-84b9-47ec-8e9b-036995d15ea6-trusted-ca\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929081 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-serving-cert\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929106 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d421eb48-b80b-4051-be83-d48b866bc67b-serving-cert\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929131 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-console-config\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929154 4998 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929183 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-oauth-serving-cert\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929207 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vlhxr\" (UniqueName: \"kubernetes.io/projected/c8df4295-741b-46e6-a5e6-482671155a00-kube-api-access-vlhxr\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929224 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ea19de-84b9-47ec-8e9b-036995d15ea6-config\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929265 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-tmp\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929291 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3223550-df04-4846-a030-56e1f6763d0b-console-serving-cert\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929316 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8df4295-741b-46e6-a5e6-482671155a00-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929328 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2febfd9e-52ad-411f-96d1-50b478dbeaa1-auth-proxy-config\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929343 4998 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-audit-policies\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929432 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-audit\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929471 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6q2c\" (UniqueName: \"kubernetes.io/projected/e7ea19de-84b9-47ec-8e9b-036995d15ea6-kube-api-access-n6q2c\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929509 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-encryption-config\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929539 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-encryption-config\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929568 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-serving-cert\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929606 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929653 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-config\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929770 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-config\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929835 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929883 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2febfd9e-52ad-411f-96d1-50b478dbeaa1-machine-approver-tls\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929898 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-audit-policies\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929915 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2febfd9e-52ad-411f-96d1-50b478dbeaa1-config\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.929971 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hcb5g\" (UniqueName: \"kubernetes.io/projected/baa20693-033c-48d7-b6d1-dbe6a846988f-kube-api-access-hcb5g\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930048 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930076 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5196d8a-8e2f-4e51-8c30-0553f127a401-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930105 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-tmp-dir\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930136 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-config\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930160 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930197 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8df4295-741b-46e6-a5e6-482671155a00-config\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930222 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7ea19de-84b9-47ec-8e9b-036995d15ea6-serving-cert\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930245 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-tmp\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930268 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930295 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930329 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-trusted-ca-bundle\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930354 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/baa20693-033c-48d7-b6d1-dbe6a846988f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930375 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-client\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930398 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-policies\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.930424 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kv2pr\" (UniqueName: \"kubernetes.io/projected/a9359b08-b878-4a61-b612-0d51c03b3e8d-kube-api-access-kv2pr\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.931265 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2febfd9e-52ad-411f-96d1-50b478dbeaa1-config\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.931477 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.931561 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7cba21c5-f3df-4d04-83db-2571902f2bff-trusted-ca-bundle\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.931654 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/baa20693-033c-48d7-b6d1-dbe6a846988f-node-pullsecrets\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.931895 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-audit\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.931942 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/7cba21c5-f3df-4d04-83db-2571902f2bff-audit-dir\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.932792 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/baa20693-033c-48d7-b6d1-dbe6a846988f-config\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.933731 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-28nnk"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.938523 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\""
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.938542 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/7093463d-f312-4923-b2b6-bdaeac386011-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-hg9l2\" (UID: \"7093463d-f312-4923-b2b6-bdaeac386011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.939960 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/2febfd9e-52ad-411f-96d1-50b478dbeaa1-machine-approver-tls\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.939980 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-serving-cert\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.940000 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-serving-cert\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.941006 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ln56w"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.941068 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.941552 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.942081 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-encryption-config\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.943964 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-encryption-config\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.943982 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/baa20693-033c-48d7-b6d1-dbe6a846988f-etcd-client\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.946364 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7cba21c5-f3df-4d04-83db-2571902f2bff-etcd-client\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.948791 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-hnz2f"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.949017 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.953293 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-mv699"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.953338 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.953920 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967311 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5ht5v"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967391 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-8f7f5"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967410 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cz726"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967468 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967493 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-chjpp"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967507 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-28nnk"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967520 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-m9nr7"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967542 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967556 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ncn97"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967568 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967582 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8w69c"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967594 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967607 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967628 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-6trs2"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967643 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967658 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967670 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967706 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.967941 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.968166 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\""
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.976257 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\""
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.976248 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.976612 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.982457 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.982488 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-fct7l"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.982656 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.990855 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.990891 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.991315 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-fct7l"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.994672 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-l7dsk"]
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.994999 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt"
Dec 08 18:53:34 crc kubenswrapper[4998]: I1208 18:53:34.997324 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.000264 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kj9vm"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.000289 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.000302 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.000512 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l7dsk"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003761 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003808 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003819 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003828 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-fct7l"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003840 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003870 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003883 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003896 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l7dsk"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003917 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003960 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003971 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003981 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.003992 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2h985"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.004086 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.007137 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-n2lzg"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.007934 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2h985"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.011817 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.011839 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2h985"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.011851 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n2lzg"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.011861 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-lrwkb"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.012624 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n2lzg"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.017737 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\""
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.020938 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zmsq7"]
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.021431 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lrwkb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.024723 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032057 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5196d8a-8e2f-4e51-8c30-0553f127a401-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032090 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7afbd75-761b-4b21-832f-8aeba8f7802f-serving-cert\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032114 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-ca\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032140 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-config\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032158 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-tmp-dir\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032784 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e5196d8a-8e2f-4e51-8c30-0553f127a401-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032850 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-tmp-dir\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032836 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3223550-df04-4846-a030-56e1f6763d0b-console-oauth-config\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032908 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2fw5\" (UniqueName: \"kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032934 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8gqjw\" (UniqueName: \"kubernetes.io/projected/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-kube-api-access-8gqjw\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032952 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqqvt\" (UniqueName: \"kubernetes.io/projected/bddbccb4-985a-4c6b-8c89-bc36f7cb2db9-kube-api-access-vqqvt\") pod \"multus-admission-controller-69db94689b-28nnk\" (UID: \"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9\") " pod="openshift-multus/multus-admission-controller-69db94689b-28nnk"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032975 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.032996 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-service-ca\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033025 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6pff\" (UniqueName: \"kubernetes.io/projected/a3223550-df04-4846-a030-56e1f6763d0b-kube-api-access-f6pff\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033043 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-serving-cert\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033062 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f1df1e3-a061-423c-a925-34521ea004f1-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033081 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tksmm\" (UniqueName: \"kubernetes.io/projected/cf0f12dc-bfab-4351-8a07-9d6636b102af-kube-api-access-tksmm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nxrnn\" (UID: \"cf0f12dc-bfab-4351-8a07-9d6636b102af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033116 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033135 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5cb67e5-9aca-42f2-8034-6d97ea435de5-tmp\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033153 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033169 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fzpv\" (UniqueName: \"kubernetes.io/projected/c7dd20c1-8265-4368-b118-a4a19d492af7-kube-api-access-6fzpv\") pod \"migrator-866fcbc849-fdmn8\" (UID: \"c7dd20c1-8265-4368-b118-a4a19d492af7\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033213 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7ea19de-84b9-47ec-8e9b-036995d15ea6-trusted-ca\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033240 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-console-config\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033265 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033294 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vlhxr\" (UniqueName: \"kubernetes.io/projected/c8df4295-741b-46e6-a5e6-482671155a00-kube-api-access-vlhxr\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033323 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033346 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37e64872-8bc7-4ea1-a674-79240aa5c7bf-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033365 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3223550-df04-4846-a030-56e1f6763d0b-console-serving-cert\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033385 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8df4295-741b-46e6-a5e6-482671155a00-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033416 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f1df1e3-a061-423c-a925-34521ea004f1-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033449 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033467 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7913d656-a73a-4352-bc24-bc8f7be42bfd-images\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033484 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-ca\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033522 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5196d8a-8e2f-4e51-8c30-0553f127a401-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033545 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-tmp-dir\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033565 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7ea19de-84b9-47ec-8e9b-036995d15ea6-serving-cert\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033583 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033601 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033620 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033639 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77fa53a8-054b-49f0-8892-3a31f83195cb-images\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033679 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fa53a8-054b-49f0-8892-3a31f83195cb-config\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033746 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-config\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033764 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5196d8a-8e2f-4e51-8c30-0553f127a401-config\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033783 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b7afbd75-761b-4b21-832f-8aeba8f7802f-available-featuregates\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033804 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033828 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4326b45d-f585-474b-8899-ed8c604ca68e-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033847 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77fa53a8-054b-49f0-8892-3a31f83195cb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033870 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-config\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.033906 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-srmsp\" (UniqueName: \"kubernetes.io/projected/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-kube-api-access-srmsp\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.034746 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-service-ca\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.035557 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e5196d8a-8e2f-4e51-8c30-0553f127a401-config\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.036365 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/b7afbd75-761b-4b21-832f-8aeba8f7802f-available-featuregates\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.037542 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/a3223550-df04-4846-a030-56e1f6763d0b-console-serving-cert\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.037999 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e7ea19de-84b9-47ec-8e9b-036995d15ea6-trusted-ca\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.038309 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.038857 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.041056 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-console-config\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.041470 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.041875 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-tmp-dir\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.042419 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-service-ca\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.042862 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.042894 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-tmp\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.042938 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77fa53a8-054b-49f0-8892-3a31f83195cb-config\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.042967 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/77fa53a8-054b-49f0-8892-3a31f83195cb-images\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043167 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6xq25\" (UniqueName: \"kubernetes.io/projected/4326b45d-f585-474b-8899-ed8c604ca68e-kube-api-access-6xq25\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043214 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043257 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-client-ca\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043339 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c82mt\" (UniqueName: \"kubernetes.io/projected/e5196d8a-8e2f-4e51-8c30-0553f127a401-kube-api-access-c82mt\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043385 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7913d656-a73a-4352-bc24-bc8f7be42bfd-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043415 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6vxg\" (UniqueName: \"kubernetes.io/projected/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-kube-api-access-g6vxg\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043475 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-metrics-tls\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043516 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043553 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4326b45d-f585-474b-8899-ed8c604ca68e-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043613 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwlss\" (UniqueName: \"kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043645 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4326b45d-f585-474b-8899-ed8c604ca68e-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043669 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kln6j\" (UniqueName: \"kubernetes.io/projected/b7afbd75-761b-4b21-832f-8aeba8f7802f-kube-api-access-kln6j\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043754 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-client-ca\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043783 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.043811 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2gtq\" (UniqueName: \"kubernetes.io/projected/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-kube-api-access-f2gtq\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.045043 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4326b45d-f585-474b-8899-ed8c604ca68e-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.045667 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-config\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.045892 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vjhh6\" (UniqueName: \"kubernetes.io/projected/d421eb48-b80b-4051-be83-d48b866bc67b-kube-api-access-vjhh6\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.045990 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-dir\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.046074 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.046732 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\""
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.047069 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d421eb48-b80b-4051-be83-d48b866bc67b-serving-cert\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.047360 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-oauth-serving-cert\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.047445 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ea19de-84b9-47ec-8e9b-036995d15ea6-config\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.047527 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-tmp\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.047609 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bddbccb4-985a-4c6b-8c89-bc36f7cb2db9-webhook-certs\") pod \"multus-admission-controller-69db94689b-28nnk\" (UID: \"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9\") " pod="openshift-multus/multus-admission-controller-69db94689b-28nnk"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.047611 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.048612 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.049189 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-client-ca\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.049318 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.049411 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7gh7\" (UniqueName: \"kubernetes.io/projected/7913d656-a73a-4352-bc24-bc8f7be42bfd-kube-api-access-d7gh7\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799"
Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.049527 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n6q2c\" (UniqueName: 
\"kubernetes.io/projected/e7ea19de-84b9-47ec-8e9b-036995d15ea6-kube-api-access-n6q2c\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.049639 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-serving-cert\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.050249 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-oauth-serving-cert\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.050473 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7ea19de-84b9-47ec-8e9b-036995d15ea6-config\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.049119 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b7afbd75-761b-4b21-832f-8aeba8f7802f-serving-cert\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.050996 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-tmp\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.051172 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.051272 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/77fa53a8-054b-49f0-8892-3a31f83195cb-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.051705 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-dir\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc 
kubenswrapper[4998]: I1208 18:53:35.051706 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-config\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.051742 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.051765 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.051787 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37e64872-8bc7-4ea1-a674-79240aa5c7bf-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052423 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052455 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-config\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052492 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c8df4295-741b-46e6-a5e6-482671155a00-config\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052564 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wmh9t\" (UniqueName: \"kubernetes.io/projected/37e64872-8bc7-4ea1-a674-79240aa5c7bf-kube-api-access-wmh9t\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:35 crc kubenswrapper[4998]: 
I1208 18:53:35.052599 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-config\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052642 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052670 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-tmp\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052746 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kv2pr\" (UniqueName: \"kubernetes.io/projected/a9359b08-b878-4a61-b612-0d51c03b3e8d-kube-api-access-kv2pr\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.052836 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053110 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d421eb48-b80b-4051-be83-d48b866bc67b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053234 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-client\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053272 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-policies\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053312 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-tmp\") pod 
\"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053332 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f1df1e3-a061-423c-a925-34521ea004f1-config\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053419 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053482 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053520 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f1df1e3-a061-423c-a925-34521ea004f1-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053596 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.053642 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-serving-cert\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.054014 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-config\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.054168 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-policies\") pod 
\"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.054578 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d421eb48-b80b-4051-be83-d48b866bc67b-serving-cert\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.054646 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.054990 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.055021 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf0f12dc-bfab-4351-8a07-9d6636b102af-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nxrnn\" (UID: \"cf0f12dc-bfab-4351-8a07-9d6636b102af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.055628 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-trusted-ca-bundle\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.056314 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cn42v\" (UniqueName: \"kubernetes.io/projected/77fa53a8-054b-49f0-8892-3a31f83195cb-kube-api-access-cn42v\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.056365 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.056402 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7913d656-a73a-4352-bc24-bc8f7be42bfd-auth-proxy-config\") pod 
\"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.057115 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/a3223550-df04-4846-a030-56e1f6763d0b-console-oauth-config\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.057404 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4326b45d-f585-474b-8899-ed8c604ca68e-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.057417 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3223550-df04-4846-a030-56e1f6763d0b-trusted-ca-bundle\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.058319 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7ea19de-84b9-47ec-8e9b-036995d15ea6-serving-cert\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.059118 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.060313 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-etcd-client\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.060659 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-serving-cert\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.061101 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.061599 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: 
\"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.061754 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.061918 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.062044 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-serving-cert\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.063868 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.064234 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.065258 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e5196d8a-8e2f-4e51-8c30-0553f127a401-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.075819 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.096073 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.116301 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.124219 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c8df4295-741b-46e6-a5e6-482671155a00-config\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.135184 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.146423 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c8df4295-741b-46e6-a5e6-482671155a00-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.156353 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157453 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bddbccb4-985a-4c6b-8c89-bc36f7cb2db9-webhook-certs\") pod \"multus-admission-controller-69db94689b-28nnk\" (UID: \"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9\") " pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157493 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157537 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7gh7\" (UniqueName: \"kubernetes.io/projected/7913d656-a73a-4352-bc24-bc8f7be42bfd-kube-api-access-d7gh7\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157566 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37e64872-8bc7-4ea1-a674-79240aa5c7bf-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157591 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wmh9t\" (UniqueName: \"kubernetes.io/projected/37e64872-8bc7-4ea1-a674-79240aa5c7bf-kube-api-access-wmh9t\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157613 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/9f1df1e3-a061-423c-a925-34521ea004f1-config\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157646 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f1df1e3-a061-423c-a925-34521ea004f1-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157670 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157719 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf0f12dc-bfab-4351-8a07-9d6636b102af-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nxrnn\" (UID: \"cf0f12dc-bfab-4351-8a07-9d6636b102af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157743 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157762 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7913d656-a73a-4352-bc24-bc8f7be42bfd-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.157948 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2fw5\" (UniqueName: \"kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158011 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vqqvt\" (UniqueName: \"kubernetes.io/projected/bddbccb4-985a-4c6b-8c89-bc36f7cb2db9-kube-api-access-vqqvt\") pod \"multus-admission-controller-69db94689b-28nnk\" (UID: \"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9\") " pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158048 4998 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-serving-cert\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158067 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9f1df1e3-a061-423c-a925-34521ea004f1-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158097 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tksmm\" (UniqueName: \"kubernetes.io/projected/cf0f12dc-bfab-4351-8a07-9d6636b102af-kube-api-access-tksmm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nxrnn\" (UID: \"cf0f12dc-bfab-4351-8a07-9d6636b102af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158122 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5cb67e5-9aca-42f2-8034-6d97ea435de5-tmp\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158150 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158171 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzpv\" (UniqueName: \"kubernetes.io/projected/c7dd20c1-8265-4368-b118-a4a19d492af7-kube-api-access-6fzpv\") pod \"migrator-866fcbc849-fdmn8\" (UID: \"c7dd20c1-8265-4368-b118-a4a19d492af7\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158200 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158222 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37e64872-8bc7-4ea1-a674-79240aa5c7bf-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158245 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" 
(UniqueName: \"kubernetes.io/projected/9f1df1e3-a061-423c-a925-34521ea004f1-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158265 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7913d656-a73a-4352-bc24-bc8f7be42bfd-images\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158292 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158313 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-config\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158341 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-config\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158363 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-tmp\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158388 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-client-ca\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158406 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7913d656-a73a-4352-bc24-bc8f7be42bfd-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.158442 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwlss\" (UniqueName: \"kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: 
\"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.162034 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f1df1e3-a061-423c-a925-34521ea004f1-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.162083 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5cb67e5-9aca-42f2-8034-6d97ea435de5-tmp\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.162334 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.162793 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/37e64872-8bc7-4ea1-a674-79240aa5c7bf-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.162827 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-tmp\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.163074 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7913d656-a73a-4352-bc24-bc8f7be42bfd-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.181894 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.196334 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.207297 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-metrics-tls\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.215077 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.235649 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.256362 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.276117 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.296305 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.315369 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.336954 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.342731 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.356447 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.362794 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-config\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.376142 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.397102 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.403248 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-client-ca\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.416441 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.422225 4998 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-config\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.436578 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.455652 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.466091 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-serving-cert\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.483459 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.492192 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.495545 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.516143 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.536117 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.555798 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.578056 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.596566 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.602620 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/cf0f12dc-bfab-4351-8a07-9d6636b102af-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nxrnn\" (UID: \"cf0f12dc-bfab-4351-8a07-9d6636b102af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.616019 4998 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.634853 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.662271 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.671943 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.675789 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.696406 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.704987 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.716079 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.756840 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.777143 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.797056 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.816801 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.836359 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.855646 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.865757 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9f1df1e3-a061-423c-a925-34521ea004f1-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.875822 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.881735 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9f1df1e3-a061-423c-a925-34521ea004f1-config\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.895907 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.915018 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.934214 4998 request.go:752] "Waited before sending request" delay="1.017822382s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-storage-version-migrator-operator/secrets?fieldSelector=metadata.name%3Dserving-cert&limit=500&resourceVersion=0" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.935833 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.955820 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.976961 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:35 crc kubenswrapper[4998]: I1208 18:53:35.996583 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.015636 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.022300 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/7913d656-a73a-4352-bc24-bc8f7be42bfd-images\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.036055 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 18:53:36 crc 
kubenswrapper[4998]: I1208 18:53:36.045817 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7913d656-a73a-4352-bc24-bc8f7be42bfd-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.074211 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-npz68\" (UniqueName: \"kubernetes.io/projected/7cba21c5-f3df-4d04-83db-2571902f2bff-kube-api-access-npz68\") pod \"apiserver-8596bd845d-mv699\" (UID: \"7cba21c5-f3df-4d04-83db-2571902f2bff\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.089568 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.104580 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qqxm8\" (UniqueName: \"kubernetes.io/projected/0f532410-7407-41fe-b95e-d1a785d4ebfe-kube-api-access-qqxm8\") pod \"downloads-747b44746d-ln56w\" (UID: \"0f532410-7407-41fe-b95e-d1a785d4ebfe\") " pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.127874 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fhfb\" (UniqueName: \"kubernetes.io/projected/2febfd9e-52ad-411f-96d1-50b478dbeaa1-kube-api-access-4fhfb\") pod \"machine-approver-54c688565-5k4zr\" (UID: \"2febfd9e-52ad-411f-96d1-50b478dbeaa1\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.138094 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hr5br\" (UniqueName: \"kubernetes.io/projected/7093463d-f312-4923-b2b6-bdaeac386011-kube-api-access-hr5br\") pod \"cluster-samples-operator-6b564684c8-hg9l2\" (UID: \"7093463d-f312-4923-b2b6-bdaeac386011\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.159006 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 18:53:36 crc kubenswrapper[4998]: E1208 18:53:36.159331 4998 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:36 crc kubenswrapper[4998]: E1208 18:53:36.159423 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/37e64872-8bc7-4ea1-a674-79240aa5c7bf-proxy-tls podName:37e64872-8bc7-4ea1-a674-79240aa5c7bf nodeName:}" failed. No retries permitted until 2025-12-08 18:53:36.659391506 +0000 UTC m=+120.307434196 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/37e64872-8bc7-4ea1-a674-79240aa5c7bf-proxy-tls") pod "machine-config-controller-f9cdd68f7-7pblj" (UID: "37e64872-8bc7-4ea1-a674-79240aa5c7bf") : failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.163939 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/bddbccb4-985a-4c6b-8c89-bc36f7cb2db9-webhook-certs\") pod \"multus-admission-controller-69db94689b-28nnk\" (UID: \"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9\") " pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.171353 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcb5g\" (UniqueName: \"kubernetes.io/projected/baa20693-033c-48d7-b6d1-dbe6a846988f-kube-api-access-hcb5g\") pod \"apiserver-9ddfb9f55-5ht5v\" (UID: \"baa20693-033c-48d7-b6d1-dbe6a846988f\") " pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.176485 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.196563 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.216280 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.252354 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.260635 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.275566 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.277028 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.330857 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.332273 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.356098 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.365403 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.376198 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.383169 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.429989 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.433131 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.433262 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.434619 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.461304 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.516077 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.519209 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.519566 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.537123 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-mv699"] Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.537832 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.563148 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.576899 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 18:53:36 crc kubenswrapper[4998]: W1208 18:53:36.605187 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7cba21c5_f3df_4d04_83db_2571902f2bff.slice/crio-0674293932c347681c4ebeef45d39ee93c8af19782984fece961209c9b862331 WatchSource:0}: Error finding container 0674293932c347681c4ebeef45d39ee93c8af19782984fece961209c9b862331: Status 404 returned error can't find the container with id 0674293932c347681c4ebeef45d39ee93c8af19782984fece961209c9b862331 Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.607523 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.616703 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 
18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.639261 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.718913 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37e64872-8bc7-4ea1-a674-79240aa5c7bf-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.724132 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/37e64872-8bc7-4ea1-a674-79240aa5c7bf-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.737478 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.737922 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.738431 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.738340 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.747986 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.756956 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.780578 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.796303 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.813336 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2"] Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.817045 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.837289 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.856417 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.874828 4998 kubelet.go:2544] "SyncLoop UPDATE" 
source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-5ht5v"] Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.876653 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.917521 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-ln56w"] Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.972777 4998 request.go:752] "Waited before sending request" delay="1.959761238s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/configmaps?fieldSelector=metadata.name%3Dkube-root-ca.crt&limit=500&resourceVersion=0" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.974220 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.976036 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.976088 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.976144 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.977974 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 18:53:36 crc kubenswrapper[4998]: I1208 18:53:36.995851 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.016441 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.036033 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 18:53:37 crc kubenswrapper[4998]: W1208 18:53:37.049309 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0f532410_7407_41fe_b95e_d1a785d4ebfe.slice/crio-ea264b3556c90908db5e9f08099b5c5ec43c965cd873b9a8fbf30f8ab6d14216 WatchSource:0}: Error finding container ea264b3556c90908db5e9f08099b5c5ec43c965cd873b9a8fbf30f8ab6d14216: Status 404 returned error can't find the container with id ea264b3556c90908db5e9f08099b5c5ec43c965cd873b9a8fbf30f8ab6d14216 Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.057608 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.102757 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8gqjw\" (UniqueName: \"kubernetes.io/projected/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-kube-api-access-8gqjw\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.123318 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6pff\" (UniqueName: \"kubernetes.io/projected/a3223550-df04-4846-a030-56e1f6763d0b-kube-api-access-f6pff\") pod \"console-64d44f6ddf-6trs2\" (UID: \"a3223550-df04-4846-a030-56e1f6763d0b\") " pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.135229 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vlhxr\" (UniqueName: \"kubernetes.io/projected/c8df4295-741b-46e6-a5e6-482671155a00-kube-api-access-vlhxr\") pod \"openshift-apiserver-operator-846cbfc458-74msb\" (UID: \"c8df4295-741b-46e6-a5e6-482671155a00\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.151869 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-srmsp\" (UniqueName: \"kubernetes.io/projected/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-kube-api-access-srmsp\") pod \"route-controller-manager-776cdc94d6-rcm4l\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.173718 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4326b45d-f585-474b-8899-ed8c604ca68e-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.310280 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/26ccddc9-6c93-44fd-a2a6-94c9725cdb6a-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-sj9qw\" (UID: \"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.376459 4998 generic.go:358] "Generic (PLEG): container finished" podID="7cba21c5-f3df-4d04-83db-2571902f2bff" containerID="d37954d89543d2c6b7f12441c34f61f5688380224e9a88fc13ba76b135a6d5db" exitCode=0 Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.389359 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" event={"ID":"7093463d-f312-4923-b2b6-bdaeac386011","Type":"ContainerStarted","Data":"4bc51705866882add2bf70beeb3d4c6e10b5302ae6d07ccdb9cc32bfdbc78568"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.389412 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" event={"ID":"7093463d-f312-4923-b2b6-bdaeac386011","Type":"ContainerStarted","Data":"5fa1f7cb33e6dce4ec03ea1fcbf6e5e1e9770ce170186c050bd19919a3c3d905"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.389425 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" event={"ID":"7cba21c5-f3df-4d04-83db-2571902f2bff","Type":"ContainerDied","Data":"d37954d89543d2c6b7f12441c34f61f5688380224e9a88fc13ba76b135a6d5db"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 
18:53:37.389439 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" event={"ID":"7cba21c5-f3df-4d04-83db-2571902f2bff","Type":"ContainerStarted","Data":"0674293932c347681c4ebeef45d39ee93c8af19782984fece961209c9b862331"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.389449 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" event={"ID":"baa20693-033c-48d7-b6d1-dbe6a846988f","Type":"ContainerStarted","Data":"59818cd8b11b152919121834d462d4bf01be932cbf4972ac4df1e5ea7d291fd5"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.390125 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerStarted","Data":"1347156e2fb0d9e97b4d28669fab5aa67ada94156750eda4ead0ce88cc58744e"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.390170 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerStarted","Data":"ea264b3556c90908db5e9f08099b5c5ec43c965cd873b9a8fbf30f8ab6d14216"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.392514 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" event={"ID":"2febfd9e-52ad-411f-96d1-50b478dbeaa1","Type":"ContainerStarted","Data":"4c3eafae19bf6ae45b499ea1f548209ac0340e54c13ab0cd0fe8aa67a58c1e17"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.393310 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.393342 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" event={"ID":"2febfd9e-52ad-411f-96d1-50b478dbeaa1","Type":"ContainerStarted","Data":"ac10495c161dc127e99554513158018f897764839d97d91510c5ba0a937b5cfa"} Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.405266 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.405431 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.433861 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7gh7\" (UniqueName: \"kubernetes.io/projected/7913d656-a73a-4352-bc24-bc8f7be42bfd-kube-api-access-d7gh7\") pod \"machine-config-operator-67c9d58cbb-mm799\" (UID: \"7913d656-a73a-4352-bc24-bc8f7be42bfd\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.497546 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqqvt\" (UniqueName: \"kubernetes.io/projected/bddbccb4-985a-4c6b-8c89-bc36f7cb2db9-kube-api-access-vqqvt\") pod 
\"multus-admission-controller-69db94689b-28nnk\" (UID: \"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9\") " pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.562601 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wmh9t\" (UniqueName: \"kubernetes.io/projected/37e64872-8bc7-4ea1-a674-79240aa5c7bf-kube-api-access-wmh9t\") pod \"machine-config-controller-f9cdd68f7-7pblj\" (UID: \"37e64872-8bc7-4ea1-a674-79240aa5c7bf\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.586457 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-kube-api-access\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.586522 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-bound-sa-token\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.586589 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1550f97-e782-4bbe-b3a8-3df18c8f4041-ca-trust-extracted\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.586631 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.586769 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb69d21-7314-46fb-b857-61fa141975a4-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.586889 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c62q4\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-kube-api-access-c62q4\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.587004 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/c881d6be-1531-4100-aae2-285bd8863d2a-tmpfs\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.587039 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-config\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.587782 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-serving-cert\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.587814 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-tmp-dir\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.587874 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1550f97-e782-4bbe-b3a8-3df18c8f4041-installation-pull-secrets\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: E1208 18:53:37.587892 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.087869658 +0000 UTC m=+121.735912548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588287 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-default-certificate\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588333 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b52kx\" (UniqueName: \"kubernetes.io/projected/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-kube-api-access-b52kx\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588362 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-certificates\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588390 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-trusted-ca\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588426 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cb69d21-7314-46fb-b857-61fa141975a4-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588496 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmrw7\" (UniqueName: \"kubernetes.io/projected/c881d6be-1531-4100-aae2-285bd8863d2a-kube-api-access-bmrw7\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588592 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-tls\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588620 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnmlg\" (UniqueName: \"kubernetes.io/projected/1cb69d21-7314-46fb-b857-61fa141975a4-kube-api-access-bnmlg\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588645 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c881d6be-1531-4100-aae2-285bd8863d2a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588666 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-stats-auth\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588727 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-service-ca-bundle\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588760 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-metrics-certs\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.588864 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c881d6be-1531-4100-aae2-285bd8863d2a-srv-cert\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.662605 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.662856 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.663060 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.663186 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 18:53:37 crc 
kubenswrapper[4998]: I1208 18:53:37.665641 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.668644 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.681068 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.689908 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690244 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1550f97-e782-4bbe-b3a8-3df18c8f4041-ca-trust-extracted\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690281 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690330 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-ready\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690406 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj6cv\" (UniqueName: \"kubernetes.io/projected/7890294d-7049-4cbf-97fa-9903320b19b2-kube-api-access-qj6cv\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690444 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb69d21-7314-46fb-b857-61fa141975a4-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690462 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5886c2-9e1f-4792-a2c7-2194ea628db9-secret-volume\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690484 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4zff\" (UniqueName: \"kubernetes.io/projected/fb4e08c6-c41a-411f-8f82-a0e46f10e791-kube-api-access-n4zff\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690513 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c62q4\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-kube-api-access-c62q4\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690573 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c881d6be-1531-4100-aae2-285bd8863d2a-tmpfs\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690602 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcg9t\" (UniqueName: \"kubernetes.io/projected/d389f524-3928-4915-857d-d54a0f164df8-kube-api-access-wcg9t\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690631 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-config\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690658 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-socket-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690705 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-mountpoint-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690741 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-node-bootstrap-token\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690794 4998 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9qfp\" (UniqueName: \"kubernetes.io/projected/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-kube-api-access-b9qfp\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690825 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-serving-cert\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690846 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-tmp-dir\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690866 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cshvt\" (UniqueName: \"kubernetes.io/projected/a8d586ff-12aa-4f23-b256-10ece6c0d728-kube-api-access-cshvt\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690887 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jt2pv\" (UniqueName: \"kubernetes.io/projected/96b56cf4-17af-4d88-8932-c3f613cdd25a-kube-api-access-jt2pv\") pod \"ingress-canary-n2lzg\" (UID: \"96b56cf4-17af-4d88-8932-c3f613cdd25a\") " pod="openshift-ingress-canary/ingress-canary-n2lzg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.690920 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-plugins-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.691483 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1550f97-e782-4bbe-b3a8-3df18c8f4041-installation-pull-secrets\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.691542 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-default-certificate\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.691573 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b52kx\" (UniqueName: 
\"kubernetes.io/projected/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-kube-api-access-b52kx\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.691621 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dafdf509-00c5-441e-988a-cc0d6e15d182-tmpfs\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.691642 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dafdf509-00c5-441e-988a-cc0d6e15d182-apiservice-cert\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.691867 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-certificates\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.692496 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-trusted-ca\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.692536 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgzhw\" (UniqueName: \"kubernetes.io/projected/30302926-ce83-487e-9f6d-e225ca6bb1ce-kube-api-access-kgzhw\") pod \"package-server-manager-77f986bd66-665bn\" (UID: \"30302926-ce83-487e-9f6d-e225ca6bb1ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.692562 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e08c6-c41a-411f-8f82-a0e46f10e791-config\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.692731 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7849a053-b0dd-4320-9a46-df76673f332f-signing-cabundle\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.692870 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c881d6be-1531-4100-aae2-285bd8863d2a-tmpfs\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: 
\"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.692997 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6d7s\" (UniqueName: \"kubernetes.io/projected/7d5886c2-9e1f-4792-a2c7-2194ea628db9-kube-api-access-j6d7s\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.693640 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1550f97-e782-4bbe-b3a8-3df18c8f4041-ca-trust-extracted\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.693720 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-certificates\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: E1208 18:53:37.693752 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.19372877 +0000 UTC m=+121.841771460 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.693792 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cb69d21-7314-46fb-b857-61fa141975a4-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.693820 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmrw7\" (UniqueName: \"kubernetes.io/projected/c881d6be-1531-4100-aae2-285bd8863d2a-kube-api-access-bmrw7\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.693839 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-csi-data-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.693988 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-tls\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694012 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmlg\" (UniqueName: \"kubernetes.io/projected/1cb69d21-7314-46fb-b857-61fa141975a4-kube-api-access-bnmlg\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694032 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c881d6be-1531-4100-aae2-285bd8863d2a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694058 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-stats-auth\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 
18:53:37.694112 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-service-ca-bundle\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694133 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d389f524-3928-4915-857d-d54a0f164df8-config-volume\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694148 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d389f524-3928-4915-857d-d54a0f164df8-metrics-tls\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694164 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbtx7\" (UniqueName: \"kubernetes.io/projected/dafdf509-00c5-441e-988a-cc0d6e15d182-kube-api-access-rbtx7\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694181 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694201 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/30302926-ce83-487e-9f6d-e225ca6bb1ce-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-665bn\" (UID: \"30302926-ce83-487e-9f6d-e225ca6bb1ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694231 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-metrics-certs\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694266 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-registration-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694494 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-tmpfs\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694820 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c881d6be-1531-4100-aae2-285bd8863d2a-srv-cert\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694851 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e08c6-c41a-411f-8f82-a0e46f10e791-serving-cert\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694867 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-tmp-dir\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694907 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr7hz\" (UniqueName: \"kubernetes.io/projected/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-kube-api-access-sr7hz\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.694970 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b56cf4-17af-4d88-8932-c3f613cdd25a-cert\") pod \"ingress-canary-n2lzg\" (UID: \"96b56cf4-17af-4d88-8932-c3f613cdd25a\") " pod="openshift-ingress-canary/ingress-canary-n2lzg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.695040 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.695105 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696013 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-certs\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696235 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-kube-api-access\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696394 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-bound-sa-token\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696452 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dafdf509-00c5-441e-988a-cc0d6e15d182-webhook-cert\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696523 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7849a053-b0dd-4320-9a46-df76673f332f-signing-key\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696820 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d389f524-3928-4915-857d-d54a0f164df8-tmp-dir\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696884 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-srv-cert\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.696923 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.697004 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kf4bg\" (UniqueName: \"kubernetes.io/projected/7849a053-b0dd-4320-9a46-df76673f332f-kube-api-access-kf4bg\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.698190 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-trusted-ca\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " 
pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.731044 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.735480 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.757268 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.777466 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.795517 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799091 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e08c6-c41a-411f-8f82-a0e46f10e791-serving-cert\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799148 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sr7hz\" (UniqueName: \"kubernetes.io/projected/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-kube-api-access-sr7hz\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799196 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b56cf4-17af-4d88-8932-c3f613cdd25a-cert\") pod \"ingress-canary-n2lzg\" (UID: \"96b56cf4-17af-4d88-8932-c3f613cdd25a\") " pod="openshift-ingress-canary/ingress-canary-n2lzg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799238 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799360 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-certs\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799403 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dafdf509-00c5-441e-988a-cc0d6e15d182-webhook-cert\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799442 4998 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7849a053-b0dd-4320-9a46-df76673f332f-signing-key\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799493 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d389f524-3928-4915-857d-d54a0f164df8-tmp-dir\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799516 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-srv-cert\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799532 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799556 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kf4bg\" (UniqueName: \"kubernetes.io/projected/7849a053-b0dd-4320-9a46-df76673f332f-kube-api-access-kf4bg\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799589 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799602 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799657 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799762 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-ready\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc 
kubenswrapper[4998]: I1208 18:53:37.799815 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qj6cv\" (UniqueName: \"kubernetes.io/projected/7890294d-7049-4cbf-97fa-9903320b19b2-kube-api-access-qj6cv\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799893 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5886c2-9e1f-4792-a2c7-2194ea628db9-secret-volume\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799923 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n4zff\" (UniqueName: \"kubernetes.io/projected/fb4e08c6-c41a-411f-8f82-a0e46f10e791-kube-api-access-n4zff\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.799982 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wcg9t\" (UniqueName: \"kubernetes.io/projected/d389f524-3928-4915-857d-d54a0f164df8-kube-api-access-wcg9t\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800043 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-socket-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800045 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/d389f524-3928-4915-857d-d54a0f164df8-tmp-dir\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: E1208 18:53:37.800063 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.300048916 +0000 UTC m=+121.948091606 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800274 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-mountpoint-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800325 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-node-bootstrap-token\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800363 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b9qfp\" (UniqueName: \"kubernetes.io/projected/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-kube-api-access-b9qfp\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800405 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cshvt\" (UniqueName: \"kubernetes.io/projected/a8d586ff-12aa-4f23-b256-10ece6c0d728-kube-api-access-cshvt\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800447 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-ready\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800456 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jt2pv\" (UniqueName: \"kubernetes.io/projected/96b56cf4-17af-4d88-8932-c3f613cdd25a-kube-api-access-jt2pv\") pod \"ingress-canary-n2lzg\" (UID: \"96b56cf4-17af-4d88-8932-c3f613cdd25a\") " pod="openshift-ingress-canary/ingress-canary-n2lzg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800525 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-plugins-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800589 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dafdf509-00c5-441e-988a-cc0d6e15d182-tmpfs\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: 
\"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800624 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dafdf509-00c5-441e-988a-cc0d6e15d182-apiservice-cert\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800681 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kgzhw\" (UniqueName: \"kubernetes.io/projected/30302926-ce83-487e-9f6d-e225ca6bb1ce-kube-api-access-kgzhw\") pod \"package-server-manager-77f986bd66-665bn\" (UID: \"30302926-ce83-487e-9f6d-e225ca6bb1ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800718 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e08c6-c41a-411f-8f82-a0e46f10e791-config\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800736 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7849a053-b0dd-4320-9a46-df76673f332f-signing-cabundle\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800764 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j6d7s\" (UniqueName: \"kubernetes.io/projected/7d5886c2-9e1f-4792-a2c7-2194ea628db9-kube-api-access-j6d7s\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800784 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-csi-data-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800837 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-mountpoint-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800845 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d389f524-3928-4915-857d-d54a0f164df8-config-volume\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800891 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/d389f524-3928-4915-857d-d54a0f164df8-metrics-tls\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800930 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rbtx7\" (UniqueName: \"kubernetes.io/projected/dafdf509-00c5-441e-988a-cc0d6e15d182-kube-api-access-rbtx7\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800960 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.800994 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/30302926-ce83-487e-9f6d-e225ca6bb1ce-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-665bn\" (UID: \"30302926-ce83-487e-9f6d-e225ca6bb1ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.801043 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-registration-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.801139 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-tmpfs\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.801198 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-csi-data-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.801735 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-registration-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.801822 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-plugins-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.802151 4998 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-tmpfs\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.802236 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7890294d-7049-4cbf-97fa-9903320b19b2-socket-dir\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.802372 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dafdf509-00c5-441e-988a-cc0d6e15d182-tmpfs\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.815505 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.825303 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.839053 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.864471 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.865150 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.881286 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.898544 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.902150 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:37 crc kubenswrapper[4998]: E1208 18:53:37.902759 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.402741724 +0000 UTC m=+122.050784414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.905847 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0f9476ef-85e4-4a1e-85a8-a225eb1e6552-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-rvjkz\" (UID: \"0f9476ef-85e4-4a1e-85a8-a225eb1e6552\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:37 crc kubenswrapper[4998]: I1208 18:53:37.917711 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.004611 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.004993 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.50497872 +0000 UTC m=+122.153021410 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.037068 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.037514 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.037671 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.037866 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.038470 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.039238 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.058383 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.060222 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.065534 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9f1df1e3-a061-423c-a925-34521ea004f1-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-mcp9s\" (UID: \"9f1df1e3-a061-423c-a925-34521ea004f1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.068380 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.076432 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.080354 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2gtq\" (UniqueName: \"kubernetes.io/projected/8589a8bc-c86d-4caf-96c4-33c5540d6b5e-kube-api-access-f2gtq\") pod \"dns-operator-799b87ffcd-m9nr7\" (UID: \"8589a8bc-c86d-4caf-96c4-33c5540d6b5e\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.082590 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kln6j\" (UniqueName: \"kubernetes.io/projected/b7afbd75-761b-4b21-832f-8aeba8f7802f-kube-api-access-kln6j\") pod \"openshift-config-operator-5777786469-cz726\" (UID: \"b7afbd75-761b-4b21-832f-8aeba8f7802f\") " pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.108201 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.109162 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.609114997 +0000 UTC m=+122.257157697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.142491 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjhh6\" (UniqueName: \"kubernetes.io/projected/d421eb48-b80b-4051-be83-d48b866bc67b-kube-api-access-vjhh6\") pod \"authentication-operator-7f5c659b84-ljt5q\" (UID: \"d421eb48-b80b-4051-be83-d48b866bc67b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.148824 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.150448 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c62q4\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-kube-api-access-c62q4\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.151733 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-config\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.162636 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.179883 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.186150 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-serving-cert\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.189278 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1550f97-e782-4bbe-b3a8-3df18c8f4041-installation-pull-secrets\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.195620 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.211490 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.212185 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.712163534 +0000 UTC m=+122.360206224 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.217453 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-default-certificate\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.221485 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.225937 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1cb69d21-7314-46fb-b857-61fa141975a4-config\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.227737 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb"] Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.282020 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.299975 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c881d6be-1531-4100-aae2-285bd8863d2a-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.300708 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.302759 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-profile-collector-cert\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: 
\"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.305624 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5886c2-9e1f-4792-a2c7-2194ea628db9-secret-volume\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.313423 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.317108 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-metrics-certs\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.317532 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.817513403 +0000 UTC m=+122.465556093 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.317599 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.318101 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.818082619 +0000 UTC m=+122.466125309 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.324441 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.328443 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-tls\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.344089 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.350552 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c881d6be-1531-4100-aae2-285bd8863d2a-srv-cert\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.355963 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.358136 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-service-ca-bundle\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.380504 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.390666 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1cb69d21-7314-46fb-b857-61fa141975a4-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.473428 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.473881 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 
podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:38.973856427 +0000 UTC m=+122.621899117 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.481815 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.482026 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.483939 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.493653 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6vxg\" (UniqueName: \"kubernetes.io/projected/dece7b61-e8f7-47c4-8b60-2c032a8bb0d1-kube-api-access-g6vxg\") pod \"etcd-operator-69b85846b6-m2dl6\" (UID: \"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.495529 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.501473 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xq25\" (UniqueName: \"kubernetes.io/projected/4326b45d-f585-474b-8899-ed8c604ca68e-kube-api-access-6xq25\") pod \"ingress-operator-6b9cb4dbcf-6jn5w\" (UID: \"4326b45d-f585-474b-8899-ed8c604ca68e\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.638534 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c82mt\" (UniqueName: \"kubernetes.io/projected/e5196d8a-8e2f-4e51-8c30-0553f127a401-kube-api-access-c82mt\") pod \"openshift-controller-manager-operator-686468bdd5-72tbb\" (UID: \"e5196d8a-8e2f-4e51-8c30-0553f127a401\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.639477 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-stats-auth\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.640418 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" event={"ID":"c8df4295-741b-46e6-a5e6-482671155a00","Type":"ContainerStarted","Data":"a07003f8f76e4c426dc26105090a4f7e7a2f17312ffce62f560b8e8e479cfd24"} Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.642247 4998 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.642405 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.642582 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" event={"ID":"7093463d-f312-4923-b2b6-bdaeac386011","Type":"ContainerStarted","Data":"407ded4ef7525eb5ed529515b899f86f40bc60af8c16bafa1170351102740832"} Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.642424 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.642780 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.647081 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.647450 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.647569 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.647934 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.147922411 +0000 UTC m=+122.795965101 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.648594 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" event={"ID":"7cba21c5-f3df-4d04-83db-2571902f2bff","Type":"ContainerStarted","Data":"59ad860c062968cc3536cc3e4cbf4fa6b1bd6c1b1ef1158dbd67d53976f8ec62"} Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.678084 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-6trs2"] Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.682874 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/96b56cf4-17af-4d88-8932-c3f613cdd25a-cert\") pod \"ingress-canary-n2lzg\" (UID: \"96b56cf4-17af-4d88-8932-c3f613cdd25a\") " pod="openshift-ingress-canary/ingress-canary-n2lzg" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.683212 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.683535 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kv2pr\" (UniqueName: \"kubernetes.io/projected/a9359b08-b878-4a61-b612-0d51c03b3e8d-kube-api-access-kv2pr\") pod \"oauth-openshift-66458b6674-ncn97\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.684006 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4e08c6-c41a-411f-8f82-a0e46f10e791-serving-cert\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.684197 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-bound-sa-token\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.684730 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6q2c\" (UniqueName: \"kubernetes.io/projected/e7ea19de-84b9-47ec-8e9b-036995d15ea6-kube-api-access-n6q2c\") pod \"console-operator-67c89758df-chjpp\" (UID: \"e7ea19de-84b9-47ec-8e9b-036995d15ea6\") " pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.685791 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-certs\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:38 crc 
kubenswrapper[4998]: I1208 18:53:38.686266 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dafdf509-00c5-441e-988a-cc0d6e15d182-webhook-cert\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.688246 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dafdf509-00c5-441e-988a-cc0d6e15d182-apiservice-cert\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.690051 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/7849a053-b0dd-4320-9a46-df76673f332f-signing-key\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.696601 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.697346 4998 generic.go:358] "Generic (PLEG): container finished" podID="baa20693-033c-48d7-b6d1-dbe6a846988f" containerID="da8c87febdc5ff70540511e0fa5bd4b61e557f9ad2077b505638ffa3e7367b14" exitCode=0 Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.697489 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" event={"ID":"baa20693-033c-48d7-b6d1-dbe6a846988f","Type":"ContainerDied","Data":"da8c87febdc5ff70540511e0fa5bd4b61e557f9ad2077b505638ffa3e7367b14"} Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.709841 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" event={"ID":"2febfd9e-52ad-411f-96d1-50b478dbeaa1","Type":"ContainerStarted","Data":"c32ff901202a23175ad85f5b421662874a5b1590aaffa43b6255c3d28d8ba82f"} Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.711304 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.716229 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.723876 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.730756 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-srv-cert\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.731403 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.751130 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.755204 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.255170691 +0000 UTC m=+122.903213381 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.790963 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw"] Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803143 4998 configmap.go:193] Couldn't get configMap openshift-operator-lifecycle-manager/collect-profiles-config: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803530 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume podName:7d5886c2-9e1f-4792-a2c7-2194ea628db9 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.303508875 +0000 UTC m=+122.951551565 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume") pod "collect-profiles-29420325-5mflt" (UID: "7d5886c2-9e1f-4792-a2c7-2194ea628db9") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803172 4998 secret.go:189] Couldn't get secret openshift-machine-config-operator/node-bootstrapper-token: failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803575 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-node-bootstrap-token podName:a8d586ff-12aa-4f23-b256-10ece6c0d728 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.303563467 +0000 UTC m=+122.951606157 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-bootstrap-token" (UniqueName: "kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-node-bootstrap-token") pod "machine-config-server-lrwkb" (UID: "a8d586ff-12aa-4f23-b256-10ece6c0d728") : failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803182 4998 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803602 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/30302926-ce83-487e-9f6d-e225ca6bb1ce-package-server-manager-serving-cert podName:30302926-ce83-487e-9f6d-e225ca6bb1ce nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.303597668 +0000 UTC m=+122.951640348 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/30302926-ce83-487e-9f6d-e225ca6bb1ce-package-server-manager-serving-cert") pod "package-server-manager-77f986bd66-665bn" (UID: "30302926-ce83-487e-9f6d-e225ca6bb1ce") : failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803201 4998 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803630 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7849a053-b0dd-4320-9a46-df76673f332f-signing-cabundle podName:7849a053-b0dd-4320-9a46-df76673f332f nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.303624799 +0000 UTC m=+122.951667489 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/7849a053-b0dd-4320-9a46-df76673f332f-signing-cabundle") pod "service-ca-74545575db-fct7l" (UID: "7849a053-b0dd-4320-9a46-df76673f332f") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803201 4998 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803664 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/fb4e08c6-c41a-411f-8f82-a0e46f10e791-config podName:fb4e08c6-c41a-411f-8f82-a0e46f10e791 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.30364927 +0000 UTC m=+122.951691960 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/fb4e08c6-c41a-411f-8f82-a0e46f10e791-config") pod "service-ca-operator-5b9c976747-p6btr" (UID: "fb4e08c6-c41a-411f-8f82-a0e46f10e791") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803215 4998 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.805426 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d389f524-3928-4915-857d-d54a0f164df8-metrics-tls podName:d389f524-3928-4915-857d-d54a0f164df8 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.305415956 +0000 UTC m=+122.953458646 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/d389f524-3928-4915-857d-d54a0f164df8-metrics-tls") pod "dns-default-l7dsk" (UID: "d389f524-3928-4915-857d-d54a0f164df8") : failed to sync secret cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.803224 4998 configmap.go:193] Couldn't get configMap openshift-dns/dns-default: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.805481 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d389f524-3928-4915-857d-d54a0f164df8-config-volume podName:d389f524-3928-4915-857d-d54a0f164df8 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.305474068 +0000 UTC m=+122.953516758 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/d389f524-3928-4915-857d-d54a0f164df8-config-volume") pod "dns-default-l7dsk" (UID: "d389f524-3928-4915-857d-d54a0f164df8") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.807224 4998 projected.go:289] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.807252 4998 projected.go:194] Error preparing data for projected volume kube-api-access-tksmm for pod openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.807281 4998 projected.go:289] Couldn't get configMap openshift-machine-api/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.807321 4998 projected.go:194] Error preparing data for projected volume kube-api-access-cn42v for pod openshift-machine-api/machine-api-operator-755bb95488-8f7f5: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.807289 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cf0f12dc-bfab-4351-8a07-9d6636b102af-kube-api-access-tksmm podName:cf0f12dc-bfab-4351-8a07-9d6636b102af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.307280026 +0000 UTC m=+122.955322716 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tksmm" (UniqueName: "kubernetes.io/projected/cf0f12dc-bfab-4351-8a07-9d6636b102af-kube-api-access-tksmm") pod "control-plane-machine-set-operator-75ffdb6fcd-nxrnn" (UID: "cf0f12dc-bfab-4351-8a07-9d6636b102af") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.807449 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/77fa53a8-054b-49f0-8892-3a31f83195cb-kube-api-access-cn42v podName:77fa53a8-054b-49f0-8892-3a31f83195cb nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.307420599 +0000 UTC m=+122.955463309 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cn42v" (UniqueName: "kubernetes.io/projected/77fa53a8-054b-49f0-8892-3a31f83195cb-kube-api-access-cn42v") pod "machine-api-operator-755bb95488-8f7f5" (UID: "77fa53a8-054b-49f0-8892-3a31f83195cb") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.813837 4998 request.go:752] "Waited before sending request" delay="1.012789998s" reason="client-side throttling, not priority and fairness" verb="POST" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-operator-lifecycle-manager/serviceaccounts/olm-operator-serviceaccount/token" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.841905 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcg9t\" (UniqueName: \"kubernetes.io/projected/d389f524-3928-4915-857d-d54a0f164df8-kube-api-access-wcg9t\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.852080 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"] Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.853448 4998 projected.go:289] Couldn't get configMap openshift-marketplace/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.853468 4998 projected.go:194] Error preparing data for projected volume kube-api-access-bwlss for pod openshift-marketplace/marketplace-operator-547dbd544d-pv6bl: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.853519 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss podName:d5cb67e5-9aca-42f2-8034-6d97ea435de5 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.353499813 +0000 UTC m=+123.001542503 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bwlss" (UniqueName: "kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss") pod "marketplace-operator-547dbd544d-pv6bl" (UID: "d5cb67e5-9aca-42f2-8034-6d97ea435de5") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.855227 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.860162 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.861557 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.361539437 +0000 UTC m=+123.009582127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.889645 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.903028 4998 projected.go:289] Couldn't get configMap openshift-kube-storage-version-migrator/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.903087 4998 projected.go:194] Error preparing data for projected volume kube-api-access-6fzpv for pod openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.903180 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c7dd20c1-8265-4368-b118-a4a19d492af7-kube-api-access-6fzpv podName:c7dd20c1-8265-4368-b118-a4a19d492af7 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.403155773 +0000 UTC m=+123.051198463 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-6fzpv" (UniqueName: "kubernetes.io/projected/c7dd20c1-8265-4368-b118-a4a19d492af7-kube-api-access-6fzpv") pod "migrator-866fcbc849-fdmn8" (UID: "c7dd20c1-8265-4368-b118-a4a19d492af7") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.906520 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.910543 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799"] Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.954617 4998 projected.go:289] Couldn't get configMap openshift-controller-manager/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.954665 4998 projected.go:194] Error preparing data for projected volume kube-api-access-f2fw5 for pod openshift-controller-manager/controller-manager-65b6cccf98-8w69c: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.954770 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5 podName:f8f4cca3-5c94-40ee-9566-f0d0bf09adc7 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.454744313 +0000 UTC m=+123.102787003 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-f2fw5" (UniqueName: "kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5") pod "controller-manager-65b6cccf98-8w69c" (UID: "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.958600 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.958910 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.959079 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.962014 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.962489 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.462466789 +0000 UTC m=+123.110509479 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:38 crc kubenswrapper[4998]: I1208 18:53:38.962926 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:38 crc kubenswrapper[4998]: E1208 18:53:38.963276 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.46326697 +0000 UTC m=+123.111309670 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.026491 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.030386 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cshvt\" (UniqueName: \"kubernetes.io/projected/a8d586ff-12aa-4f23-b256-10ece6c0d728-kube-api-access-cshvt\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.038089 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43390: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.060341 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.062492 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9qfp\" (UniqueName: \"kubernetes.io/projected/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-kube-api-access-b9qfp\") pod \"cni-sysctl-allowlist-ds-zmsq7\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.062548 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj"] Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.071224 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.071970 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.571900676 +0000 UTC m=+123.219943366 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.077904 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.095841 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.100332 4998 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.100495 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.106126 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-28nnk"] Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.108540 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43402: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.114423 4998 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.114481 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.116163 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.156483 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.158328 4998 projected.go:289] Couldn't get configMap openshift-ingress/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.172957 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.173500 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.673487395 +0000 UTC m=+123.321530085 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.183393 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.192514 4998 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.192582 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.213189 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.213435 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43408: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.216351 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.235716 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.284491 4998 kubelet_pods.go:1019] "Unable to retrieve pull secret, the image pull may not succeed." pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" secret="" err="failed to sync secret cache: timed out waiting for the condition" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.284589 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.285286 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.285532 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.785504432 +0000 UTC m=+123.433547122 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.285853 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.285934 4998 projected.go:289] Couldn't get configMap openshift-kube-storage-version-migrator-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.286264 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.286292 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.786284872 +0000 UTC m=+123.434327562 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.291457 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.292449 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.295062 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.305849 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.313578 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43422: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.315735 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.318640 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.337285 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.346169 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.357400 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.359128 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.375466 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.387518 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.387815 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tksmm\" (UniqueName: \"kubernetes.io/projected/cf0f12dc-bfab-4351-8a07-9d6636b102af-kube-api-access-tksmm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nxrnn\" (UID: \"cf0f12dc-bfab-4351-8a07-9d6636b102af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.387863 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-node-bootstrap-token\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.387966 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.887937313 +0000 UTC m=+123.535980003 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388030 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e08c6-c41a-411f-8f82-a0e46f10e791-config\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388066 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/7849a053-b0dd-4320-9a46-df76673f332f-signing-cabundle\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388174 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d389f524-3928-4915-857d-d54a0f164df8-config-volume\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388208 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d389f524-3928-4915-857d-d54a0f164df8-metrics-tls\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388235 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388260 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/30302926-ce83-487e-9f6d-e225ca6bb1ce-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-665bn\" (UID: \"30302926-ce83-487e-9f6d-e225ca6bb1ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388318 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwlss\" (UniqueName: \"kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388473 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cn42v\" (UniqueName: 
\"kubernetes.io/projected/77fa53a8-054b-49f0-8892-3a31f83195cb-kube-api-access-cn42v\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388500 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.388792 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b6f55ae2-a8ab-42e7-906f-142a30fa8c07-kube-api-access\") pod \"kube-apiserver-operator-575994946d-pjb8j\" (UID: \"b6f55ae2-a8ab-42e7-906f-142a30fa8c07\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.388805 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.888795505 +0000 UTC m=+123.536838185 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.389440 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4e08c6-c41a-411f-8f82-a0e46f10e791-config\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.389506 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.391011 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d389f524-3928-4915-857d-d54a0f164df8-config-volume\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.396428 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.402018 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/7849a053-b0dd-4320-9a46-df76673f332f-signing-cabundle\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.406449 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tksmm\" (UniqueName: \"kubernetes.io/projected/cf0f12dc-bfab-4351-8a07-9d6636b102af-kube-api-access-tksmm\") pod \"control-plane-machine-set-operator-75ffdb6fcd-nxrnn\" (UID: \"cf0f12dc-bfab-4351-8a07-9d6636b102af\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.407331 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/a8d586ff-12aa-4f23-b256-10ece6c0d728-node-bootstrap-token\") pod \"machine-config-server-lrwkb\" (UID: \"a8d586ff-12aa-4f23-b256-10ece6c0d728\") " pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.409064 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/30302926-ce83-487e-9f6d-e225ca6bb1ce-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-665bn\" (UID: \"30302926-ce83-487e-9f6d-e225ca6bb1ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.417481 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/d389f524-3928-4915-857d-d54a0f164df8-metrics-tls\") pod \"dns-default-l7dsk\" (UID: \"d389f524-3928-4915-857d-d54a0f164df8\") " pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.417611 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.422078 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwlss\" (UniqueName: \"kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss\") pod \"marketplace-operator-547dbd544d-pv6bl\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.424784 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43438: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.432635 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn42v\" (UniqueName: \"kubernetes.io/projected/77fa53a8-054b-49f0-8892-3a31f83195cb-kube-api-access-cn42v\") pod \"machine-api-operator-755bb95488-8f7f5\" (UID: \"77fa53a8-054b-49f0-8892-3a31f83195cb\") " pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.455554 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.457043 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 18:53:39 
crc kubenswrapper[4998]: I1208 18:53:39.459301 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.478661 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.490390 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.490942 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.990901858 +0000 UTC m=+123.638944548 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.493673 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.493885 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f2fw5\" (UniqueName: \"kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.494021 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6fzpv\" (UniqueName: \"kubernetes.io/projected/c7dd20c1-8265-4368-b118-a4a19d492af7-kube-api-access-6fzpv\") pod \"migrator-866fcbc849-fdmn8\" (UID: \"c7dd20c1-8265-4368-b118-a4a19d492af7\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.495310 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:39.995288435 +0000 UTC m=+123.643331125 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.499928 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.507077 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.514422 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fzpv\" (UniqueName: \"kubernetes.io/projected/c7dd20c1-8265-4368-b118-a4a19d492af7-kube-api-access-6fzpv\") pod \"migrator-866fcbc849-fdmn8\" (UID: \"c7dd20c1-8265-4368-b118-a4a19d492af7\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.521113 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43444: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.522351 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.526764 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2fw5\" (UniqueName: \"kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5\") pod \"controller-manager-65b6cccf98-8w69c\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.532787 4998 projected.go:194] Error preparing data for projected volume kube-api-access-b52kx for pod openshift-ingress/router-default-68cf44c8b8-hnz2f: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.532931 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-kube-api-access-b52kx podName:d6c8acb6-a7a7-4d46-9e9c-35018e8287ed nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.032902254 +0000 UTC m=+123.680944944 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-b52kx" (UniqueName: "kubernetes.io/projected/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-kube-api-access-b52kx") pod "router-default-68cf44c8b8-hnz2f" (UID: "d6c8acb6-a7a7-4d46-9e9c-35018e8287ed") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.559026 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.571712 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6d7s\" (UniqueName: \"kubernetes.io/projected/7d5886c2-9e1f-4792-a2c7-2194ea628db9-kube-api-access-j6d7s\") pod \"collect-profiles-29420325-5mflt\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.576586 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rbtx7\" (UniqueName: \"kubernetes.io/projected/dafdf509-00c5-441e-988a-cc0d6e15d182-kube-api-access-rbtx7\") pod \"packageserver-7d4fc7d867-sdn44\" (UID: \"dafdf509-00c5-441e-988a-cc0d6e15d182\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.579209 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.583206 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sr7hz\" (UniqueName: \"kubernetes.io/projected/f7ba83f0-26ea-43d7-bcba-b03f02e2c98f-kube-api-access-sr7hz\") pod \"olm-operator-5cdf44d969-g9mjg\" (UID: \"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.587636 4998 projected.go:194] Error preparing data for projected volume kube-api-access-bnmlg for pod openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl: failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.587770 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1cb69d21-7314-46fb-b857-61fa141975a4-kube-api-access-bnmlg podName:1cb69d21-7314-46fb-b857-61fa141975a4 nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.087743802 +0000 UTC m=+123.735786492 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-bnmlg" (UniqueName: "kubernetes.io/projected/1cb69d21-7314-46fb-b857-61fa141975a4-kube-api-access-bnmlg") pod "kube-storage-version-migrator-operator-565b79b866-zsgbl" (UID: "1cb69d21-7314-46fb-b857-61fa141975a4") : failed to sync configmap cache: timed out waiting for the condition Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.588806 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmrw7\" (UniqueName: \"kubernetes.io/projected/c881d6be-1531-4100-aae2-285bd8863d2a-kube-api-access-bmrw7\") pod \"catalog-operator-75ff9f647d-5r5cb\" (UID: \"c881d6be-1531-4100-aae2-285bd8863d2a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.595458 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.595861 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgzhw\" (UniqueName: \"kubernetes.io/projected/30302926-ce83-487e-9f6d-e225ca6bb1ce-kube-api-access-kgzhw\") pod \"package-server-manager-77f986bd66-665bn\" (UID: \"30302926-ce83-487e-9f6d-e225ca6bb1ce\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.596207 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.096188586 +0000 UTC m=+123.744231276 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.597630 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.604895 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kf4bg\" (UniqueName: \"kubernetes.io/projected/7849a053-b0dd-4320-9a46-df76673f332f-kube-api-access-kf4bg\") pod \"service-ca-74545575db-fct7l\" (UID: \"7849a053-b0dd-4320-9a46-df76673f332f\") " pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.619798 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43452: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.626277 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.646840 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.656125 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.656419 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.699969 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.699992 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.700370 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.200358883 +0000 UTC m=+123.848401573 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.702616 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.705559 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj6cv\" (UniqueName: \"kubernetes.io/projected/7890294d-7049-4cbf-97fa-9903320b19b2-kube-api-access-qj6cv\") pod \"csi-hostpathplugin-2h985\" (UID: \"7890294d-7049-4cbf-97fa-9903320b19b2\") " pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.709754 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-l7dsk" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.721937 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4zff\" (UniqueName: \"kubernetes.io/projected/fb4e08c6-c41a-411f-8f82-a0e46f10e791-kube-api-access-n4zff\") pod \"service-ca-operator-5b9c976747-p6btr\" (UID: \"fb4e08c6-c41a-411f-8f82-a0e46f10e791\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.740492 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.757142 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-lrwkb" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.787820 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jt2pv\" (UniqueName: \"kubernetes.io/projected/96b56cf4-17af-4d88-8932-c3f613cdd25a-kube-api-access-jt2pv\") pod \"ingress-canary-n2lzg\" (UID: \"96b56cf4-17af-4d88-8932-c3f613cdd25a\") " pod="openshift-ingress-canary/ingress-canary-n2lzg" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.788676 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.793034 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.793236 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.797423 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.802123 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.806647 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.807645 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.808077 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.308042014 +0000 UTC m=+123.956084704 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.809457 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43458: no serving certificate available for the kubelet" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.820530 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.823879 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.875179 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.882255 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.882741 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.882903 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.883178 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.887332 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.887997 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.893215 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-m9nr7"] Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.902166 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.905851 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" event={"ID":"c8df4295-741b-46e6-a5e6-482671155a00","Type":"ContainerStarted","Data":"ba1bf3f00cd4c82a04379379d04f300ec1a7cec31a0d813b53cd200c37c6153c"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.911100 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:39 crc kubenswrapper[4998]: E1208 18:53:39.911744 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.411715229 +0000 UTC m=+124.059757919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.912144 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-fct7l" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.922917 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" event={"ID":"37e64872-8bc7-4ea1-a674-79240aa5c7bf","Type":"ContainerStarted","Data":"6bc04de5937f7ac5b848c07ad8f3b29faf2bfd474b481591018e779ceaec0c30"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.923038 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" event={"ID":"37e64872-8bc7-4ea1-a674-79240aa5c7bf","Type":"ContainerStarted","Data":"c4596cffd5c9e4b91db6aad7090c88554cdb558eb2056acb3c8d7d1737c726b0"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.937191 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" event={"ID":"baa20693-033c-48d7-b6d1-dbe6a846988f","Type":"ContainerStarted","Data":"58cd09c75e3a23063bc82a5382dd430388ca57cca550306265c5f441162068ca"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.942052 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" event={"ID":"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9","Type":"ContainerStarted","Data":"b5a3b5165ee5ad6d1b3a8d00d42ba65c3204031d0399b3848020ae9a340a02c5"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.943452 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" event={"ID":"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a","Type":"ContainerStarted","Data":"74bfa36cdc00d7399d2985cdbddafc1abc6f12ae579ba0f5f821b9807c701080"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.943505 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" event={"ID":"26ccddc9-6c93-44fd-a2a6-94c9725cdb6a","Type":"ContainerStarted","Data":"f2def0db31bdd5434b7c2fbb70170a924732377d986b2262f1ca6a70cda02a0e"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.944870 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-6trs2" event={"ID":"a3223550-df04-4846-a030-56e1f6763d0b","Type":"ContainerStarted","Data":"03cec2c6c59caf1706fb62fa1f22a3fb6863d8dfbac2f428d46c98a5905c95de"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.944908 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-6trs2" event={"ID":"a3223550-df04-4846-a030-56e1f6763d0b","Type":"ContainerStarted","Data":"fa0601573f6f46b944b1e54dc4a91965bfdd2fe8a6d32d86ef51810d5e89fcf3"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.946185 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" event={"ID":"7913d656-a73a-4352-bc24-bc8f7be42bfd","Type":"ContainerStarted","Data":"4b8f001ca99caa4dea70dbf658d7062b8c9203b14f321edd7a4434badd870a03"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.948004 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" event={"ID":"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f","Type":"ContainerStarted","Data":"01104f18717f4eac438036f0af17cd5eeeaf9b0d6b6190794d57738b30d5a02d"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.948039 4998 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" event={"ID":"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f","Type":"ContainerStarted","Data":"8c5101a50fb1a72786392a48ad513f9ff24742105e88c7896305daf4c0a06af6"} Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.948183 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.953075 4998 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rcm4l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 08 18:53:39 crc kubenswrapper[4998]: I1208 18:53:39.953325 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.056864 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.058371 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.058595 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.058952 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.062191 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.062527 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b52kx\" (UniqueName: \"kubernetes.io/projected/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-kube-api-access-b52kx\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.063340 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.063486 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.063576 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-2h985" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.063607 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-n2lzg" Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.063934 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.563914293 +0000 UTC m=+124.211956983 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.129836 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b52kx\" (UniqueName: \"kubernetes.io/projected/d6c8acb6-a7a7-4d46-9e9c-35018e8287ed-kube-api-access-b52kx\") pod \"router-default-68cf44c8b8-hnz2f\" (UID: \"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed\") " pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.218939 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.219203 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bnmlg\" (UniqueName: \"kubernetes.io/projected/1cb69d21-7314-46fb-b857-61fa141975a4-kube-api-access-bnmlg\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.220506 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.720475961 +0000 UTC m=+124.368518651 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.239538 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnmlg\" (UniqueName: \"kubernetes.io/projected/1cb69d21-7314-46fb-b857-61fa141975a4-kube-api-access-bnmlg\") pod \"kube-storage-version-migrator-operator-565b79b866-zsgbl\" (UID: \"1cb69d21-7314-46fb-b857-61fa141975a4\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.312389 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.320624 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.322933 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.323076 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.823053417 +0000 UTC m=+124.471096107 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.323363 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.323793 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.823783846 +0000 UTC m=+124.471826536 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.376031 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.383657 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.429665 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.430211 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:40.930189693 +0000 UTC m=+124.578232393 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.537469 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.537869 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.037853283 +0000 UTC m=+124.685895973 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.624990 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43470: no serving certificate available for the kubelet" Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.658282 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.658899 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.15887439 +0000 UTC m=+124.806917080 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.760703 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.761131 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.261109276 +0000 UTC m=+124.909151966 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.869194 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.869953 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.369931116 +0000 UTC m=+125.017973816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:40 crc kubenswrapper[4998]: I1208 18:53:40.974193 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:40 crc kubenswrapper[4998]: E1208 18:53:40.974958 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.474944756 +0000 UTC m=+125.122987446 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.078303 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:41 crc kubenswrapper[4998]: E1208 18:53:41.078831 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.578811866 +0000 UTC m=+125.226854566 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.081028 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz"] Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.147150 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.147217 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.181879 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" event={"ID":"7913d656-a73a-4352-bc24-bc8f7be42bfd","Type":"ContainerStarted","Data":"6d3d3181c87609d1fa3df39020b00acfe7c52836c851ca22e0340c34653231c3"} Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.189358 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:41 crc kubenswrapper[4998]: E1208 18:53:41.190615 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.690600306 +0000 UTC m=+125.338642996 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.453307 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.463242 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" event={"ID":"8589a8bc-c86d-4caf-96c4-33c5540d6b5e","Type":"ContainerStarted","Data":"413b12d9efa8d448748e948d0f52af22fd0055b780c22cf82076c14c5c2fd09a"} Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.485611 4998 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rcm4l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.485679 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 08 18:53:41 crc kubenswrapper[4998]: E1208 18:53:41.499398 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:41.999357299 +0000 UTC m=+125.647399989 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.596287 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" event={"ID":"2e0a5cfb-edf8-471a-a968-dbc68e8639fb","Type":"ContainerStarted","Data":"7e23435211092787427d76b34dad2d82ca7cf406d8d0a00f4b39ac487ccc75c6"} Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.599733 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" podStartSLOduration=96.599712936 podStartE2EDuration="1m36.599712936s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:41.476636596 +0000 UTC m=+125.124679286" watchObservedRunningTime="2025-12-08 18:53:41.599712936 +0000 UTC m=+125.247755656" Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.618141 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s"] Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.695125 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q"] Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.699128 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:41 crc kubenswrapper[4998]: E1208 18:53:41.703004 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.202985729 +0000 UTC m=+125.851028419 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.730150 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-ln56w" podStartSLOduration=97.73010071 podStartE2EDuration="1m37.73010071s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:41.725082396 +0000 UTC m=+125.373125086" watchObservedRunningTime="2025-12-08 18:53:41.73010071 +0000 UTC m=+125.378143410" Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.820960 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:41 crc kubenswrapper[4998]: E1208 18:53:41.821304 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.321286062 +0000 UTC m=+125.969328752 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.898306 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-74msb" podStartSLOduration=97.898287808 podStartE2EDuration="1m37.898287808s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:41.897717512 +0000 UTC m=+125.545760202" watchObservedRunningTime="2025-12-08 18:53:41.898287808 +0000 UTC m=+125.546330508" Dec 08 18:53:41 crc kubenswrapper[4998]: I1208 18:53:41.922261 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:41 crc kubenswrapper[4998]: E1208 18:53:41.922775 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.422757568 +0000 UTC m=+126.070800258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.023419 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.023809 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.523793152 +0000 UTC m=+126.171835842 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.122004 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43484: no serving certificate available for the kubelet" Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.126584 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.126907 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.626894462 +0000 UTC m=+126.274937152 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.147976 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.190180 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w"] Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.232547 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.232872 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.732844597 +0000 UTC m=+126.380887287 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.233308 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.233627 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.733618157 +0000 UTC m=+126.381660847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.355807 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.356361 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.856340507 +0000 UTC m=+126.504383197 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.459445 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.459859 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:42.959845898 +0000 UTC m=+126.607888588 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.525414 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-6trs2" podStartSLOduration=98.525389949 podStartE2EDuration="1m38.525389949s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:42.52014652 +0000 UTC m=+126.168189200" watchObservedRunningTime="2025-12-08 18:53:42.525389949 +0000 UTC m=+126.173432639" Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.560879 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.561216 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.06119971 +0000 UTC m=+126.709242400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.576516 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" event={"ID":"d421eb48-b80b-4051-be83-d48b866bc67b","Type":"ContainerStarted","Data":"087af078cf8de22ec241cee64c602c10ee0a57906b7489b728e611d2584737fb"} Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.664710 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.665010 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.164998238 +0000 UTC m=+126.813040928 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.709080 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" event={"ID":"9f1df1e3-a061-423c-a925-34521ea004f1","Type":"ContainerStarted","Data":"ec910a9cc9d4d014c12cf2d03ca7cb3bb884e3f9c7d1ae52715ba19e2f30875b"} Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.765497 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.765819 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.265801286 +0000 UTC m=+126.913843976 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.775429 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" event={"ID":"0f9476ef-85e4-4a1e-85a8-a225eb1e6552","Type":"ContainerStarted","Data":"71afd06470d20651c37ea939e3ee51dcf805c6243e83b0e6c03e945be7522b7f"} Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.797682 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lrwkb" event={"ID":"a8d586ff-12aa-4f23-b256-10ece6c0d728","Type":"ContainerStarted","Data":"f8385dee9cf118eb3b8dba695441e7a0128d566e8145438f070d166b2a0bee30"} Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.916772 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:42 crc kubenswrapper[4998]: E1208 18:53:42.917123 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.417111646 +0000 UTC m=+127.065154336 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:42 crc kubenswrapper[4998]: I1208 18:53:42.968755 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" event={"ID":"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9","Type":"ContainerStarted","Data":"252ebf6db14cba4b3cb8ea10ee40dc89ffd3441d532e3e8bd7796ef3f956481b"} Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.017474 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.018008 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.517988037 +0000 UTC m=+127.166030727 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.075409 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-hg9l2" podStartSLOduration=99.075393382 podStartE2EDuration="1m39.075393382s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:43.073556913 +0000 UTC m=+126.721599603" watchObservedRunningTime="2025-12-08 18:53:43.075393382 +0000 UTC m=+126.723436072" Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.112153 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" event={"ID":"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed","Type":"ContainerStarted","Data":"b4e8ec58a29e5ccc0b07e1f7bc4b3d3aa21d2ac2659ba1d133883a2ee6f26c48"} Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.118922 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.119296 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.619284308 +0000 UTC m=+127.267326998 (durationBeforeRetry 500ms). 
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.136621 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699"
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.189665 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-5k4zr" podStartSLOduration=99.189646537 podStartE2EDuration="1m39.189646537s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:43.188448975 +0000 UTC m=+126.836491655" watchObservedRunningTime="2025-12-08 18:53:43.189646537 +0000 UTC m=+126.837689217"
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.220264 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.221481 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.721462982 +0000 UTC m=+127.369505672 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.323856 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.402586 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.902548774 +0000 UTC m=+127.550591464 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.446609 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.449396 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:43.949361038 +0000 UTC m=+127.597403728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.556072 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.556857 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.056836533 +0000 UTC m=+127.704879223 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.662050 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.662796 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.162771028 +0000 UTC m=+127.810813718 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.761988 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ncn97"]
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.772629 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.773218 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.273199522 +0000 UTC m=+127.921242212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.793167 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-cz726"]
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.874171 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.874729 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.374668297 +0000 UTC m=+128.022710987 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.904321 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6"]
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.928891 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb"]
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.970581 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8"]
Dec 08 18:53:43 crc kubenswrapper[4998]: I1208 18:53:43.975620 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:43 crc kubenswrapper[4998]: E1208 18:53:43.975983 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.475969319 +0000 UTC m=+128.124012009 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.079233 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.079704 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.579654393 +0000 UTC m=+128.227697083 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.091900 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-mv699" podStartSLOduration=99.091880698 podStartE2EDuration="1m39.091880698s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:44.089530506 +0000 UTC m=+127.737573196" watchObservedRunningTime="2025-12-08 18:53:44.091880698 +0000 UTC m=+127.739923378"
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.136306 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-sj9qw" podStartSLOduration=100.136232686 podStartE2EDuration="1m40.136232686s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:44.122726608 +0000 UTC m=+127.770769298" watchObservedRunningTime="2025-12-08 18:53:44.136232686 +0000 UTC m=+127.784275376"
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.181672 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.182155 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.682136166 +0000 UTC m=+128.330178856 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.225791 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-chjpp"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.258393 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-8f7f5"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.285142 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.285638 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.785621646 +0000 UTC m=+128.433664336 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.366812 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a"
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.387202 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.387653 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:44.887635996 +0000 UTC m=+128.535678686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.396487 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.411105 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-l7dsk"]
Dec 08 18:53:44 crc kubenswrapper[4998]: W1208 18:53:44.433952 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7ea19de_84b9_47ec_8e9b_036995d15ea6.slice/crio-5c966d25960becbe05ec62f7302a97b933266b3a1f0646991de2010e89a73fa6 WatchSource:0}: Error finding container 5c966d25960becbe05ec62f7302a97b933266b3a1f0646991de2010e89a73fa6: Status 404 returned error can't find the container with id 5c966d25960becbe05ec62f7302a97b933266b3a1f0646991de2010e89a73fa6
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.501598 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.503790 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.003759362 +0000 UTC m=+128.651802062 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.592298 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.614079 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.614475 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.114462713 +0000 UTC m=+128.762505403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.649612 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.650015 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.664078 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.699765 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" event={"ID":"b7afbd75-761b-4b21-832f-8aeba8f7802f","Type":"ContainerStarted","Data":"307fadf31acfd9ced486eb1d153c6da320c812b845cf6b2786ea039bc63def77"}
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.715885 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.716296 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.216279577 +0000 UTC m=+128.864322267 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.749214 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" event={"ID":"7913d656-a73a-4352-bc24-bc8f7be42bfd","Type":"ContainerStarted","Data":"8fbd392e607323fa192da0d20bc00ca89d3df9f5190a1a62931cac41a2bb9a15"}
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.776943 4998 ???:1] "http: TLS handshake error from 192.168.126.11:45668: no serving certificate available for the kubelet"
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.827917 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.829324 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.32930858 +0000 UTC m=+128.977351270 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.842375 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" event={"ID":"37e64872-8bc7-4ea1-a674-79240aa5c7bf","Type":"ContainerStarted","Data":"7637d2104c7b0c4eb431bb1dcd234b1d972f7d357fab87fcefd40c8d0004ab7a"}
Dec 08 18:53:44 crc kubenswrapper[4998]: W1208 18:53:44.844066 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb6f55ae2_a8ab_42e7_906f_142a30fa8c07.slice/crio-54550028945c28befc9a476a60f16718b0fe309b1e8ceccbc168c135ef2a6d6f WatchSource:0}: Error finding container 54550028945c28befc9a476a60f16718b0fe309b1e8ceccbc168c135ef2a6d6f: Status 404 returned error can't find the container with id 54550028945c28befc9a476a60f16718b0fe309b1e8ceccbc168c135ef2a6d6f
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.907587 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podStartSLOduration=100.90757056 podStartE2EDuration="1m40.90757056s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:44.90682491 +0000 UTC m=+128.554867600" watchObservedRunningTime="2025-12-08 18:53:44.90757056 +0000 UTC m=+128.555613250"
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.908398 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-mm799" podStartSLOduration=100.908391522 podStartE2EDuration="1m40.908391522s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:44.855333172 +0000 UTC m=+128.503375872" watchObservedRunningTime="2025-12-08 18:53:44.908391522 +0000 UTC m=+128.556434212"
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.912055 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8w69c"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.924504 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-fct7l"]
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.940072 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:44 crc kubenswrapper[4998]: E1208 18:53:44.941106 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.44108921 +0000 UTC m=+129.089131900 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:44 crc kubenswrapper[4998]: I1208 18:53:44.981318 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" event={"ID":"a9359b08-b878-4a61-b612-0d51c03b3e8d","Type":"ContainerStarted","Data":"d6999630ee20ea341c737c92541fbbe912bce3093755e252b802a8d41b1035df"}
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.024332 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-7pblj" podStartSLOduration=101.024308031 podStartE2EDuration="1m41.024308031s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:44.960627429 +0000 UTC m=+128.608670119" watchObservedRunningTime="2025-12-08 18:53:45.024308031 +0000 UTC m=+128.672350721"
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.024885 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt"]
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.057187 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl"]
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.063423 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.063951 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.563934314 +0000 UTC m=+129.211977004 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.145950 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn"]
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.146188 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" event={"ID":"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1","Type":"ContainerStarted","Data":"a0a43c7336b081255ce563ebc5a03f61b8399e43e21cba0231af4a2bd41c9595"}
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.149591 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-n2lzg"]
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.152984 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb"]
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.161966 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" event={"ID":"e5196d8a-8e2f-4e51-8c30-0553f127a401","Type":"ContainerStarted","Data":"5f85e36b44088b16c90eb47853f8a0f3ce10c7d34d743e8afd6c331c1977e970"}
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.164327 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.164733 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.664714882 +0000 UTC m=+129.312757572 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.183918 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" event={"ID":"4326b45d-f585-474b-8899-ed8c604ca68e","Type":"ContainerStarted","Data":"6126348c51d5f37fd5aa215336bb1bf87c5cc003f273337a93f8d3033de2a51e"}
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.240639 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-lrwkb" event={"ID":"a8d586ff-12aa-4f23-b256-10ece6c0d728","Type":"ContainerStarted","Data":"9ad63a165c48bffabb029919862a62f84a3859a7c35f0b8c2df58ea5da6840e1"}
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.255678 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-2h985"]
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.277500 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.280361 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.780345753 +0000 UTC m=+129.428388443 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.306650 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-lrwkb" podStartSLOduration=11.306625932 podStartE2EDuration="11.306625932s" podCreationTimestamp="2025-12-08 18:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:45.306312924 +0000 UTC m=+128.954355614" watchObservedRunningTime="2025-12-08 18:53:45.306625932 +0000 UTC m=+128.954668622"
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.380214 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.380418 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.880380542 +0000 UTC m=+129.528423232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.380584 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.381119 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.88110353 +0000 UTC m=+129.529146210 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.386767 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body=
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.401005 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused"
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.425793 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f"
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.442330 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr"]
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.481767 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.482154 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:45.982138194 +0000 UTC m=+129.630180884 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.583373 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.583825 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.083802376 +0000 UTC m=+129.731845116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.693076 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.693667 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.193644184 +0000 UTC m=+129.841686874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.796490 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.797125 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.297111733 +0000 UTC m=+129.945154423 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:45 crc kubenswrapper[4998]: I1208 18:53:45.897960 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:45 crc kubenswrapper[4998]: E1208 18:53:45.898433 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.398413615 +0000 UTC m=+130.046456305 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.127720 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.128773 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.628754014 +0000 UTC m=+130.276796704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.243060 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.243348 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.743311758 +0000 UTC m=+130.391354458 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.391480 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.392130 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.892117451 +0000 UTC m=+130.540160141 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.395325 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.395424 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.413141 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:46 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:46 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:46 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.413249 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.416159 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n2lzg" event={"ID":"96b56cf4-17af-4d88-8932-c3f613cdd25a","Type":"ContainerStarted","Data":"91f76e1234070018520071fa6b18fdf5b634263359a8953d87363cc3c6bebf00"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.473623 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" event={"ID":"c881d6be-1531-4100-aae2-285bd8863d2a","Type":"ContainerStarted","Data":"831a4ce69cc5e39a3c4f6439e26776e1618cb74169ccfa220b416bd8bc3b2d5e"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.482291 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" event={"ID":"dafdf509-00c5-441e-988a-cc0d6e15d182","Type":"ContainerStarted","Data":"109ce87464a33e3cc34a283f69c49a68f7e6502d63ee388a924a701b13360035"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.492860 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.493445 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:46.993423183 +0000 UTC m=+130.641465883 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.509224 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" event={"ID":"d421eb48-b80b-4051-be83-d48b866bc67b","Type":"ContainerStarted","Data":"b56ce113e32f2abbe13deb16ad973e45a65664575e8b8ce2db511ce6b75eb87d"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.543279 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2h985" event={"ID":"7890294d-7049-4cbf-97fa-9903320b19b2","Type":"ContainerStarted","Data":"c8058416095b94ffd26f549caefa5966d8948d1648dad68e020f2921d950d88b"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.552098 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-ljt5q" podStartSLOduration=102.552081282 podStartE2EDuration="1m42.552081282s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:46.546108142 +0000 UTC m=+130.194150832" watchObservedRunningTime="2025-12-08 18:53:46.552081282 +0000 UTC m=+130.200123972" Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.569207 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" event={"ID":"b6f55ae2-a8ab-42e7-906f-142a30fa8c07","Type":"ContainerStarted","Data":"54550028945c28befc9a476a60f16718b0fe309b1e8ceccbc168c135ef2a6d6f"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.572238 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" event={"ID":"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f","Type":"ContainerStarted","Data":"deff57f5969f4b5aaee48377e958860ebb0a8ed870a67948b3a3685fe66988d3"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.578858 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" event={"ID":"8589a8bc-c86d-4caf-96c4-33c5540d6b5e","Type":"ContainerStarted","Data":"28ed880da71ae7ade36bbb95a14c734c04be11c1a3e40493d816101e2ebb8a63"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.595463 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.595803 
4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.095790913 +0000 UTC m=+130.743833603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.609898 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" event={"ID":"9f1df1e3-a061-423c-a925-34521ea004f1","Type":"ContainerStarted","Data":"d35c3145d039255aeeb6b9c7200d3514592fe05f54fad8bddd18dd6fcd7b4f97"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.619063 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" event={"ID":"fb4e08c6-c41a-411f-8f82-a0e46f10e791","Type":"ContainerStarted","Data":"f1bfbf45137ae0ea8cfa9e2eb32d626af29c01e9643dbe57336fc8a80d050b59"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.621360 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" event={"ID":"7d5886c2-9e1f-4792-a2c7-2194ea628db9","Type":"ContainerStarted","Data":"982fcc9b12ec8a1dd981111f31fbdbaa36773a3506682d78e0fa2cf366ec3658"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.631720 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-fct7l" event={"ID":"7849a053-b0dd-4320-9a46-df76673f332f","Type":"ContainerStarted","Data":"15e4fc48c0f63e894ef6c7804f0e4a9a7fe90502fa7c7c73b7a861fd26eeffa4"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.644865 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" event={"ID":"c7dd20c1-8265-4368-b118-a4a19d492af7","Type":"ContainerStarted","Data":"750fa7f983eb5da3fe048b4e1fff5b532ee34ce6f3026280ab9b82bcbb6963ce"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.656099 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" event={"ID":"0f9476ef-85e4-4a1e-85a8-a225eb1e6552","Type":"ContainerStarted","Data":"405c7976aa8a44af94d50571b977687595d3f8c3bf805d465044933df3e99cf3"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.658519 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" event={"ID":"e5196d8a-8e2f-4e51-8c30-0553f127a401","Type":"ContainerStarted","Data":"799bbc6269f5d48fa8da1aab417265a993812ecb6b9bf8de42241fa4ac814b6c"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.679428 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" event={"ID":"4326b45d-f585-474b-8899-ed8c604ca68e","Type":"ContainerStarted","Data":"f53f0c2c6ac896f3dd4a5211cc6c061fbbc7a7c3a748b3644e535bbcc674706e"} Dec 08 
18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.682251 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" event={"ID":"d5cb67e5-9aca-42f2-8034-6d97ea435de5","Type":"ContainerStarted","Data":"dbf0b2ecf4ba9c051c99bd3381ec950411dae9a86cd2bdfa5ce27f372b952344"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.685461 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" event={"ID":"baa20693-033c-48d7-b6d1-dbe6a846988f","Type":"ContainerStarted","Data":"4a2043be05fc94044f5560451c6cfdf05a88e32b1a1fdfb780b8e2c067314831"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.698139 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-mcp9s" podStartSLOduration=102.698121811 podStartE2EDuration="1m42.698121811s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:46.641375003 +0000 UTC m=+130.289417713" watchObservedRunningTime="2025-12-08 18:53:46.698121811 +0000 UTC m=+130.346164491" Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.700038 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.700482 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.200470404 +0000 UTC m=+130.848513094 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.721939 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-rvjkz" podStartSLOduration=102.721918794 podStartE2EDuration="1m42.721918794s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:46.699502908 +0000 UTC m=+130.347545598" watchObservedRunningTime="2025-12-08 18:53:46.721918794 +0000 UTC m=+130.369961484" Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.821131 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.823288 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.323271616 +0000 UTC m=+130.971314366 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.829323 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" event={"ID":"bddbccb4-985a-4c6b-8c89-bc36f7cb2db9","Type":"ContainerStarted","Data":"b04f75e84347eb70f5e0db292faedfffc7ecf8220839c0f21ba253f2c8272c7f"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.899619 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" event={"ID":"d6c8acb6-a7a7-4d46-9e9c-35018e8287ed","Type":"ContainerStarted","Data":"4890077ce4d66335a0993df1b9f9fa8e04c4fddbd4702fb81d0e3df467316509"} Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.925334 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:46 crc kubenswrapper[4998]: E1208 18:53:46.925963 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.425944065 +0000 UTC m=+131.073986755 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.938665 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-72tbb" podStartSLOduration=102.938647482 podStartE2EDuration="1m42.938647482s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:46.72441894 +0000 UTC m=+130.372461630" watchObservedRunningTime="2025-12-08 18:53:46.938647482 +0000 UTC m=+130.586690172" Dec 08 18:53:46 crc kubenswrapper[4998]: I1208 18:53:46.994101 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v" podStartSLOduration=102.994081184 podStartE2EDuration="1m42.994081184s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:46.936963356 +0000 UTC m=+130.585006066" watchObservedRunningTime="2025-12-08 18:53:46.994081184 +0000 UTC m=+130.642123874" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.028076 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.029246 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.529233758 +0000 UTC m=+131.177276448 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.045426 4998 generic.go:358] "Generic (PLEG): container finished" podID="b7afbd75-761b-4b21-832f-8aeba8f7802f" containerID="20eac1e618884f3106a511e562ef401c521e6649edbc504abd2b015ee5eb7cc7" exitCode=0 Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.045550 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" event={"ID":"b7afbd75-761b-4b21-832f-8aeba8f7802f","Type":"ContainerDied","Data":"20eac1e618884f3106a511e562ef401c521e6649edbc504abd2b015ee5eb7cc7"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.076464 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" event={"ID":"30302926-ce83-487e-9f6d-e225ca6bb1ce","Type":"ContainerStarted","Data":"bab15f6f98f1e908820e362898ed197fb31ba37e17da3308f1f7ef6b93cfe791"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.096558 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-28nnk" podStartSLOduration=102.096530506 podStartE2EDuration="1m42.096530506s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:47.035959767 +0000 UTC m=+130.684002457" watchObservedRunningTime="2025-12-08 18:53:47.096530506 +0000 UTC m=+130.744573196" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.117809 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" event={"ID":"77fa53a8-054b-49f0-8892-3a31f83195cb","Type":"ContainerStarted","Data":"0e58eb040fea5d0755c1d8c8f203f4a008dd3cdc8ad6fbf0f301cba43c1669f9"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.117950 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" event={"ID":"77fa53a8-054b-49f0-8892-3a31f83195cb","Type":"ContainerStarted","Data":"5bfc326fb8f694fba20047035ed2ef1bca665b3b3f80352efb0b1dc8deab4821"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.141087 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.141444 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.641409769 +0000 UTC m=+131.289452459 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.142136 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.142415 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.642407375 +0000 UTC m=+131.290450065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.162137 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-chjpp" event={"ID":"e7ea19de-84b9-47ec-8e9b-036995d15ea6","Type":"ContainerStarted","Data":"5c966d25960becbe05ec62f7302a97b933266b3a1f0646991de2010e89a73fa6"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.163193 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.192261 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" event={"ID":"2e0a5cfb-edf8-471a-a968-dbc68e8639fb","Type":"ContainerStarted","Data":"d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.193211 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.193867 4998 patch_prober.go:28] interesting pod/console-operator-67c89758df-chjpp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.194207 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-chjpp" podUID="e7ea19de-84b9-47ec-8e9b-036995d15ea6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 08 18:53:47 crc 
kubenswrapper[4998]: I1208 18:53:47.216677 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-chjpp" podStartSLOduration=103.216662378 podStartE2EDuration="1m43.216662378s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:47.214752977 +0000 UTC m=+130.862795667" watchObservedRunningTime="2025-12-08 18:53:47.216662378 +0000 UTC m=+130.864705068" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.217603 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" event={"ID":"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7","Type":"ContainerStarted","Data":"109ed352cc7128d8d9e38e8fc24404f2dd1c81c6cfd6ea4b1be0d3b798c97955"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.225902 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" event={"ID":"1cb69d21-7314-46fb-b857-61fa141975a4","Type":"ContainerStarted","Data":"c71d01dd0ec3729634a5503d594688ae722a4700758d709b20af984563e13e83"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.229900 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l7dsk" event={"ID":"d389f524-3928-4915-857d-d54a0f164df8","Type":"ContainerStarted","Data":"fd18adf51c96306f1b94229045a7b56cfe869b24d22ab9b2ffb9503ae50278d4"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.233443 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" event={"ID":"cf0f12dc-bfab-4351-8a07-9d6636b102af","Type":"ContainerStarted","Data":"0908cdc2ca1fbb703b60f23b9e2832baa9e10e9b810eaf60927288193398fe4d"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.233497 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" event={"ID":"cf0f12dc-bfab-4351-8a07-9d6636b102af","Type":"ContainerStarted","Data":"00235b56d25199f6a947eed3cd052e35328b748a2e469c543d32222d58755b24"} Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.244626 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.247489 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.747464446 +0000 UTC m=+131.395507136 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.271083 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podStartSLOduration=13.271064703 podStartE2EDuration="13.271064703s" podCreationTimestamp="2025-12-08 18:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:47.270161939 +0000 UTC m=+130.918204629" watchObservedRunningTime="2025-12-08 18:53:47.271064703 +0000 UTC m=+130.919107393" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.317822 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-nxrnn" podStartSLOduration=103.317804935 podStartE2EDuration="1m43.317804935s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:47.31725646 +0000 UTC m=+130.965299160" watchObservedRunningTime="2025-12-08 18:53:47.317804935 +0000 UTC m=+130.965847625" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.350476 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.350898 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:47.850885294 +0000 UTC m=+131.498927984 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.404269 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:47 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:47 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:47 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.404334 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.615295 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.615891 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.115865814 +0000 UTC m=+131.763908504 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.658339 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.667740 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.670883 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.681782 4998 patch_prober.go:28] interesting pod/console-64d44f6ddf-6trs2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.681841 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-6trs2" podUID="a3223550-df04-4846-a030-56e1f6763d0b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.723442 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.725169 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.225152297 +0000 UTC m=+131.873194977 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.827133 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.827201 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.327171468 +0000 UTC m=+131.975214158 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.836782 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.837286 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.337272497 +0000 UTC m=+131.985315187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.943398 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.943518 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.443499188 +0000 UTC m=+132.091541868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:47 crc kubenswrapper[4998]: I1208 18:53:47.944129 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:47 crc kubenswrapper[4998]: E1208 18:53:47.944421 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.444413073 +0000 UTC m=+132.092455763 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.045572 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.046020 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.546002223 +0000 UTC m=+132.194044913 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.151343 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.153117 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.653096617 +0000 UTC m=+132.301139307 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.253841 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.254329 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.754309146 +0000 UTC m=+132.402351846 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.358721 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.359292 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.859270755 +0000 UTC m=+132.507313445 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.392908 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:48 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:48 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:48 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.393291 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.433533 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" event={"ID":"a9359b08-b878-4a61-b612-0d51c03b3e8d","Type":"ContainerStarted","Data":"ce8a089c081b029afdb8ad93192f94c8cc2fa0328e0a209e429782944d86a4a2"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.437487 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.451457 4998 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-ncn97 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.451569 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.460078 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.462138 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:48.962114648 +0000 UTC m=+132.610157338 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.472072 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-chjpp" event={"ID":"e7ea19de-84b9-47ec-8e9b-036995d15ea6","Type":"ContainerStarted","Data":"e02b5e092e162b1f20df18cb8724a71d566da99084e3f8abc67c7cef9a32e81b"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.476504 4998 patch_prober.go:28] interesting pod/console-operator-67c89758df-chjpp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.476556 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-chjpp" podUID="e7ea19de-84b9-47ec-8e9b-036995d15ea6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.489880 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" event={"ID":"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7","Type":"ContainerStarted","Data":"6ed471f92818eb082fb9e298575bb7a0b1d45d9b71291dcacdaa1c58924fd4b7"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.490555 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.492706 4998 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-8w69c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.492782 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.509214 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" event={"ID":"c881d6be-1531-4100-aae2-285bd8863d2a","Type":"ContainerStarted","Data":"420ef35906f6d83b8e9ec4a3c7720e618cc4b61cc938dea3ce5ce7db19014824"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.512125 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.513746 4998 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-5r5cb 
container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.513845 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" podUID="c881d6be-1531-4100-aae2-285bd8863d2a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.519841 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" podStartSLOduration=104.51980824 podStartE2EDuration="1m44.51980824s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:48.50778121 +0000 UTC m=+132.155823900" watchObservedRunningTime="2025-12-08 18:53:48.51980824 +0000 UTC m=+132.167850930" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.543563 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" event={"ID":"f7ba83f0-26ea-43d7-bcba-b03f02e2c98f","Type":"ContainerStarted","Data":"066cba7b83cb3a94517fe85507bce0b5770bbb0a2bbde44cdeeb02db91969b5a"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.545256 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.546658 4998 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-g9mjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.546761 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" podUID="f7ba83f0-26ea-43d7-bcba-b03f02e2c98f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.562824 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.567935 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" podStartSLOduration=104.567900668 podStartE2EDuration="1m44.567900668s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:48.566328346 +0000 UTC m=+132.214371046" watchObservedRunningTime="2025-12-08 18:53:48.567900668 
+0000 UTC m=+132.215943358" Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.572602 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.072582133 +0000 UTC m=+132.720624823 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.602482 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" podStartSLOduration=103.602464326 podStartE2EDuration="1m43.602464326s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:48.600544026 +0000 UTC m=+132.248586716" watchObservedRunningTime="2025-12-08 18:53:48.602464326 +0000 UTC m=+132.250507006" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.617115 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" event={"ID":"c7dd20c1-8265-4368-b118-a4a19d492af7","Type":"ContainerStarted","Data":"65ef7522998364fde08f769c912ade156ce939b8ce81c3a2910ccbf3c44dec3d"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.632935 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" podStartSLOduration=103.632902575 podStartE2EDuration="1m43.632902575s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:48.621694848 +0000 UTC m=+132.269737538" watchObservedRunningTime="2025-12-08 18:53:48.632902575 +0000 UTC m=+132.280945265" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.639883 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" event={"ID":"dece7b61-e8f7-47c4-8b60-2c032a8bb0d1","Type":"ContainerStarted","Data":"f7493d4f4d1d755925f2a01c7f05fcd3733041fe534bf621ad4abe00d5b2a43e"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.644879 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" event={"ID":"4326b45d-f585-474b-8899-ed8c604ca68e","Type":"ContainerStarted","Data":"bdbd04cff0c3055e47c63c220da85f16bce903f7eaf05d965843d0a850addbeb"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.646950 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.650474 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6cdbaa0c4eef1fcaea7d8f929f2a5f9fbf498aac9c6d6f7b551d1b60c2e623b4"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.652258 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.660501 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" event={"ID":"d5cb67e5-9aca-42f2-8034-6d97ea435de5","Type":"ContainerStarted","Data":"037d130d2c963e61930a7daa64232930a4f34b090a62029baf790d1b0fdafb0c"} Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.660549 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.665595 4998 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-pv6bl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.665673 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.667821 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.686868 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.186825587 +0000 UTC m=+132.834868277 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.713435 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.713518 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.770888 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.771380 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.271365904 +0000 UTC m=+132.919408594 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.846256 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=48.846240993 podStartE2EDuration="48.846240993s" podCreationTimestamp="2025-12-08 18:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:48.751405483 +0000 UTC m=+132.399448173" watchObservedRunningTime="2025-12-08 18:53:48.846240993 +0000 UTC m=+132.494283683" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.847045 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" podStartSLOduration=104.847040955 podStartE2EDuration="1m44.847040955s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:48.845062311 +0000 UTC m=+132.493105001" watchObservedRunningTime="2025-12-08 18:53:48.847040955 +0000 UTC m=+132.495083645" Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.872419 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.873268 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.37324801 +0000 UTC m=+133.021290690 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.936526 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zmsq7"] Dec 08 18:53:48 crc kubenswrapper[4998]: I1208 18:53:48.974773 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:48 crc kubenswrapper[4998]: E1208 18:53:48.975178 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.475165798 +0000 UTC m=+133.123208488 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.075573 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.075906 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.575887294 +0000 UTC m=+133.223929984 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.176902 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.177233 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.677215737 +0000 UTC m=+133.325258427 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.278334 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.278709 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.778666602 +0000 UTC m=+133.426709292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.392578 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.392955 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.892943248 +0000 UTC m=+133.540985938 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.431625 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:49 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:49 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:49 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.431728 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.493908 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.497565 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:49.997530846 +0000 UTC m=+133.645573536 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.512250 4998 patch_prober.go:28] interesting pod/console-operator-67c89758df-chjpp container/console-operator namespace/openshift-console-operator: Liveness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.512299 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console-operator/console-operator-67c89758df-chjpp" podUID="e7ea19de-84b9-47ec-8e9b-036995d15ea6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.598417 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.599175 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.099160097 +0000 UTC m=+133.747202787 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.803769 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.804157 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.304120892 +0000 UTC m=+133.952163582 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.825438 4998 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-pv6bl container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.825511 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.873545 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" event={"ID":"1cb69d21-7314-46fb-b857-61fa141975a4","Type":"ContainerStarted","Data":"adbb693efa65be6521bb75d8385084a5e2336575dbb8ee633b40d0e96a8d1ed7"} Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.897094 4998 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-g9mjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.897185 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" podUID="f7ba83f0-26ea-43d7-bcba-b03f02e2c98f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.897308 4998 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-5r5cb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" start-of-body= Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.897336 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" podUID="c881d6be-1531-4100-aae2-285bd8863d2a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused" Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.906714 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 
08 18:53:49 crc kubenswrapper[4998]: E1208 18:53:49.907071 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.407054337 +0000 UTC m=+134.055097027 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:49 crc kubenswrapper[4998]: I1208 18:53:49.937445 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l7dsk" event={"ID":"d389f524-3928-4915-857d-d54a0f164df8","Type":"ContainerStarted","Data":"6a1055e329ca994f05a6efbe833a8099b7dfd8c24515b5643338c29bce0d2bdd"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.014392 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.014633 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.514600345 +0000 UTC m=+134.162643035 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.014899 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-n2lzg" event={"ID":"96b56cf4-17af-4d88-8932-c3f613cdd25a","Type":"ContainerStarted","Data":"8acc475522cb321357da3f37c80368cff078d296b5742f7786e77287fabe2bec"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.015071 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.015467 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.515450187 +0000 UTC m=+134.163492877 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.125532 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" event={"ID":"dafdf509-00c5-441e-988a-cc0d6e15d182","Type":"ContainerStarted","Data":"740f5161e30bc9a407b231228c3321582234317b48c43f24d765034f404ec522"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.126846 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.127665 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.129652 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.6296197 +0000 UTC m=+134.277662400 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.131499 4998 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-sdn44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body= Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.131577 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" podUID="dafdf509-00c5-441e-988a-cc0d6e15d182" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.154631 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" event={"ID":"b6f55ae2-a8ab-42e7-906f-142a30fa8c07","Type":"ContainerStarted","Data":"32d897e22a1f3bc3c2c58bf0dcad51fdee079690bcb8e110cc3ad98e5b28440d"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.177870 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" event={"ID":"8589a8bc-c86d-4caf-96c4-33c5540d6b5e","Type":"ContainerStarted","Data":"2f38b8fa266588d771e664a0ca78e8e9e970b988e5d84ebc9e8a622d5c6110d2"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.184930 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" event={"ID":"fb4e08c6-c41a-411f-8f82-a0e46f10e791","Type":"ContainerStarted","Data":"1d5d96a8ff3ef0185fd3f64022954a96ffedf61d2f50db4e15819c47f13e15ef"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.192308 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" event={"ID":"7d5886c2-9e1f-4792-a2c7-2194ea628db9","Type":"ContainerStarted","Data":"fa2d2898bda3f73754fe582be91c92574e4d5f7862a0acf2c660a0141463d075"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.227411 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-fct7l" event={"ID":"7849a053-b0dd-4320-9a46-df76673f332f","Type":"ContainerStarted","Data":"60f5d49a39349096f09153b6a2dcb5ec19d2b6b446b968f607fb1fc2623d6ee7"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.229440 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.232746 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.732727839 +0000 UTC m=+134.380770529 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.252292 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-zsgbl" podStartSLOduration=106.252259408 podStartE2EDuration="1m46.252259408s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:50.154262694 +0000 UTC m=+133.802305374" watchObservedRunningTime="2025-12-08 18:53:50.252259408 +0000 UTC m=+133.900302098" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.291321 4998 ???:1] "http: TLS handshake error from 192.168.126.11:45672: no serving certificate available for the kubelet" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.307567 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" event={"ID":"b7afbd75-761b-4b21-832f-8aeba8f7802f","Type":"ContainerStarted","Data":"b7ea56aac20f65c72e5c333f3298e1359ffcda4b95f9262b142573c145b99822"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.308210 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.368444 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.368832 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.868811815 +0000 UTC m=+134.516854505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.372402 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" event={"ID":"30302926-ce83-487e-9f6d-e225ca6bb1ce","Type":"ContainerStarted","Data":"71a5b0608e556637074a65d53b90315b4254235fddb0c69aff37e8f0e9c38d13"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.373105 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.381898 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" podStartSLOduration=105.381877532 podStartE2EDuration="1m45.381877532s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:50.267370639 +0000 UTC m=+133.915413329" watchObservedRunningTime="2025-12-08 18:53:50.381877532 +0000 UTC m=+134.029920222" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.388534 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.393715 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:50 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:50 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:50 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.393776 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.395500 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" event={"ID":"77fa53a8-054b-49f0-8892-3a31f83195cb","Type":"ContainerStarted","Data":"df824dbc85aec3a17f2231b982314900de804d27d984743bfa8f126d649075a1"} Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.397087 4998 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-ncn97 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" start-of-body= Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.397152 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" 
podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": dial tcp 10.217.0.19:6443: connect: connection refused" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.397224 4998 patch_prober.go:28] interesting pod/console-operator-67c89758df-chjpp container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.397241 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-chjpp" podUID="e7ea19de-84b9-47ec-8e9b-036995d15ea6" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.397284 4998 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-g9mjg container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" start-of-body= Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.397303 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" podUID="f7ba83f0-26ea-43d7-bcba-b03f02e2c98f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.38:8443/healthz\": dial tcp 10.217.0.38:8443: connect: connection refused" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.398395 4998 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-8w69c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.398423 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.399743 4998 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-pv6bl container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" start-of-body= Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.399773 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": dial tcp 10.217.0.24:8080: connect: connection refused" Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.405444 4998 patch_prober.go:28] interesting pod/catalog-operator-75ff9f647d-5r5cb container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: 
connection refused" start-of-body=
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.405544 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" podUID="c881d6be-1531-4100-aae2-285bd8863d2a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.26:8443/healthz\": dial tcp 10.217.0.26:8443: connect: connection refused"
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.477362 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.484362 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:50.984344644 +0000 UTC m=+134.632387334 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.575218 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-n2lzg" podStartSLOduration=16.575194828 podStartE2EDuration="16.575194828s" podCreationTimestamp="2025-12-08 18:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:50.397498137 +0000 UTC m=+134.045540827" watchObservedRunningTime="2025-12-08 18:53:50.575194828 +0000 UTC m=+134.223237518"
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.577495 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-m9nr7" podStartSLOduration=106.577481829 podStartE2EDuration="1m46.577481829s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:50.574608993 +0000 UTC m=+134.222651683" watchObservedRunningTime="2025-12-08 18:53:50.577481829 +0000 UTC m=+134.225524519"
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.579608 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.579802 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.079773099 +0000 UTC m=+134.727815789 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.580476 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.580877 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.080870549 +0000 UTC m=+134.728913239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.718500 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.718812 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.218792154 +0000 UTC m=+134.866834844 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.786677 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-8f7f5" podStartSLOduration=106.786658844 podStartE2EDuration="1m46.786658844s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:50.784481082 +0000 UTC m=+134.432523772" watchObservedRunningTime="2025-12-08 18:53:50.786658844 +0000 UTC m=+134.434701524"
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.883094 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p6btr" podStartSLOduration=105.883075031 podStartE2EDuration="1m45.883075031s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:50.879591541 +0000 UTC m=+134.527634241" watchObservedRunningTime="2025-12-08 18:53:50.883075031 +0000 UTC m=+134.531117721"
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.925130 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:50 crc kubenswrapper[4998]: E1208 18:53:50.927979 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.42795605 +0000 UTC m=+135.075998740 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:50 crc kubenswrapper[4998]: I1208 18:53:50.940534 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" podStartSLOduration=105.940498167 podStartE2EDuration="1m45.940498167s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:50.844724838 +0000 UTC m=+134.492767548" watchObservedRunningTime="2025-12-08 18:53:50.940498167 +0000 UTC m=+134.588540857"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.020425 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-pjb8j" podStartSLOduration=107.020400003 podStartE2EDuration="1m47.020400003s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.019847858 +0000 UTC m=+134.667890548" watchObservedRunningTime="2025-12-08 18:53:51.020400003 +0000 UTC m=+134.668442703"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.030207 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.030875 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.5308512 +0000 UTC m=+135.178893890 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.137252 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.138092 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.638068435 +0000 UTC m=+135.286111135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.198488 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-fct7l" podStartSLOduration=106.198455895 podStartE2EDuration="1m46.198455895s" podCreationTimestamp="2025-12-08 18:52:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.197061745 +0000 UTC m=+134.845104435" watchObservedRunningTime="2025-12-08 18:53:51.198455895 +0000 UTC m=+134.846498585"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.200319 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" podStartSLOduration=107.200303357 podStartE2EDuration="1m47.200303357s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.081288637 +0000 UTC m=+134.729331337" watchObservedRunningTime="2025-12-08 18:53:51.200303357 +0000 UTC m=+134.848346057"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.238425 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.238887 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.738867857 +0000 UTC m=+135.386910547 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.349746 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.350448 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.850426415 +0000 UTC m=+135.498469105 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.392488 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 18:53:51 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld
Dec 08 18:53:51 crc kubenswrapper[4998]: [+]process-running ok
Dec 08 18:53:51 crc kubenswrapper[4998]: healthz check failed
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.392589 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.393952 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-6jn5w" podStartSLOduration=107.393929664 podStartE2EDuration="1m47.393929664s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.386048859 +0000 UTC m=+135.034091549" watchObservedRunningTime="2025-12-08 18:53:51.393929664 +0000 UTC m=+135.041972354"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.395421 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" podStartSLOduration=107.395415396 podStartE2EDuration="1m47.395415396s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.314721537 +0000 UTC m=+134.962764227" watchObservedRunningTime="2025-12-08 18:53:51.395415396 +0000 UTC m=+135.043458086"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.427131 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" event={"ID":"c7dd20c1-8265-4368-b118-a4a19d492af7","Type":"ContainerStarted","Data":"278d25531f9fddd03ff24b450bab995d284b5bc2597d92fdc791f6cefc9a8bab"}
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.434976 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.435479 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.451797 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.452519 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" event={"ID":"30302926-ce83-487e-9f6d-e225ca6bb1ce","Type":"ContainerStarted","Data":"d095a93eed0cf4f9471957f8b97b306edbd9d6f116063f2fb341f5f07bf057d8"}
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.452934 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:51.952909314 +0000 UTC m=+135.600952004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.469550 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-l7dsk" event={"ID":"d389f524-3928-4915-857d-d54a0f164df8","Type":"ContainerStarted","Data":"fbfcb423e0f0b19221f16bddf5662c7b7608e4387f9266d565f96299955e9367"}
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.469888 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-l7dsk"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.471492 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.472107 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" gracePeriod=30
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.481909 4998 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-sdn44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.481994 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" podUID="dafdf509-00c5-441e-988a-cc0d6e15d182" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.482816 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-m2dl6" podStartSLOduration=107.482803795 podStartE2EDuration="1m47.482803795s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.440877341 +0000 UTC m=+135.088920051" watchObservedRunningTime="2025-12-08 18:53:51.482803795 +0000 UTC m=+135.130846475"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.484643 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-fdmn8" podStartSLOduration=107.484639087 podStartE2EDuration="1m47.484639087s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.482484376 +0000 UTC m=+135.130527066" watchObservedRunningTime="2025-12-08 18:53:51.484639087 +0000 UTC m=+135.132681777"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.510705 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.566796 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.569798 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.069781803 +0000 UTC m=+135.717824493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.577162 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-l7dsk" podStartSLOduration=18.577128413 podStartE2EDuration="18.577128413s" podCreationTimestamp="2025-12-08 18:53:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:51.576545666 +0000 UTC m=+135.224588366" watchObservedRunningTime="2025-12-08 18:53:51.577128413 +0000 UTC m=+135.225171103"
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.671895 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.672122 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.172084817 +0000 UTC m=+135.820127507 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.672923 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.673581 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.17356397 +0000 UTC m=+135.821606660 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.774342 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.774573 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.274533797 +0000 UTC m=+135.922576487 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.775476 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.775852 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.275842953 +0000 UTC m=+135.923885643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.876798 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.877061 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.377021546 +0000 UTC m=+136.025064236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:51 crc kubenswrapper[4998]: I1208 18:53:51.877448 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:51 crc kubenswrapper[4998]: E1208 18:53:51.878041 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.378029884 +0000 UTC m=+136.026072574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:51.978450 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.056924 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.556875679 +0000 UTC m=+136.204918589 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.080585 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.081046 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.581032317 +0000 UTC m=+136.229075007 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.184072 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.184614 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.684591547 +0000 UTC m=+136.332634237 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.285228 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.285517 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.785504312 +0000 UTC m=+136.433547002 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.395366 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.395646 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.89562891 +0000 UTC m=+136.543671600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.400844 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 18:53:52 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld
Dec 08 18:53:52 crc kubenswrapper[4998]: [+]process-running ok
Dec 08 18:53:52 crc kubenswrapper[4998]: healthz check failed
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.400895 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.405855 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.409574 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.412819 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\""
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.417380 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\""
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.481187 4998 patch_prober.go:28] interesting pod/oauth-openshift-66458b6674-ncn97 container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.19:6443/healthz\": context deadline exceeded" start-of-body=
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.481457 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.19:6443/healthz\": context deadline exceeded"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.484091 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"]
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.498965 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.499730 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:52.999714474 +0000 UTC m=+136.647757164 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.525875 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2h985" event={"ID":"7890294d-7049-4cbf-97fa-9903320b19b2","Type":"ContainerStarted","Data":"353cd223a20c153077b6c066619f64cdac17ddc0999a78e6d25800b589b76db8"}
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.526183 4998 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-sdn44 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.526236 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" podUID="dafdf509-00c5-441e-988a-cc0d6e15d182" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.528155 4998 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-cz726 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused" start-of-body=
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.528218 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" podUID="b7afbd75-761b-4b21-832f-8aeba8f7802f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": dial tcp 10.217.0.12:8443: connect: connection refused"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.597659 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-5ht5v"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.600086 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.600191 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c965f4-cc05-4b58-8507-efad8b8726c9-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.600279 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.100261259 +0000 UTC m=+136.748303949 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.600398 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12c965f4-cc05-4b58-8507-efad8b8726c9-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.702720 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.703019 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12c965f4-cc05-4b58-8507-efad8b8726c9-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.703074 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c965f4-cc05-4b58-8507-efad8b8726c9-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.703357 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12c965f4-cc05-4b58-8507-efad8b8726c9-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.703379 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.203365226 +0000 UTC m=+136.851407916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.740607 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c965f4-cc05-4b58-8507-efad8b8726c9-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.741091 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc"
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.816371 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.817105 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.317074126 +0000 UTC m=+136.965116816 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:52 crc kubenswrapper[4998]: I1208 18:53:52.924156 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:52 crc kubenswrapper[4998]: E1208 18:53:52.924913 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.424889347 +0000 UTC m=+137.072932037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.026871 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.027234 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.527215882 +0000 UTC m=+137.175258562 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.027610 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.028042 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.528028815 +0000 UTC m=+137.176071505 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.179750 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.180231 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.6802124 +0000 UTC m=+137.328255090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.183127 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.183610 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.683600327 +0000 UTC m=+137.331643017 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.304625 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.305213 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.805195991 +0000 UTC m=+137.453238671 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.406118 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Dec 08 18:53:53 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld
Dec 08 18:53:53 crc kubenswrapper[4998]: [+]process-running ok
Dec 08 18:53:53 crc kubenswrapper[4998]: healthz check failed
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.407574 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.406834 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.407136 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:53.907122425 +0000 UTC m=+137.555165115 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.509651 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.509939 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.009923163 +0000 UTC m=+137.657965853 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.612603 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.613032 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.113016931 +0000 UTC m=+137.761059631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.713901 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.714140 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.214087469 +0000 UTC m=+137.862130159 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.714309 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.715344 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.215327544 +0000 UTC m=+137.863370234 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.848514 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") "
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.848903 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.348886229 +0000 UTC m=+137.996928919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Dec 08 18:53:53 crc kubenswrapper[4998]: I1208 18:53:53.960539 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 18:53:53 crc kubenswrapper[4998]: E1208 18:53:53.960982 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.460966602 +0000 UTC m=+138.109009292 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.062417 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.062987 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.562901546 +0000 UTC m=+138.210944236 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.063409 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.064048 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.564036009 +0000 UTC m=+138.212078699 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.164403 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.164811 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.664794159 +0000 UTC m=+138.312836849 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.280773 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.281171 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.781157794 +0000 UTC m=+138.429200484 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.406827 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.407346 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:54.907326588 +0000 UTC m=+138.555369278 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.417816 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:54 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:54 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:54 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.417922 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.508893 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.509353 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.009305863 +0000 UTC m=+138.657348553 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.808456 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.808996 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.3089684 +0000 UTC m=+138.957011090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:54 crc kubenswrapper[4998]: I1208 18:53:54.909648 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:54 crc kubenswrapper[4998]: E1208 18:53:54.911449 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.411418818 +0000 UTC m=+139.059461508 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.064788 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.065507 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.565485437 +0000 UTC m=+139.213528127 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.070568 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-lcgg8"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.142998 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ljbvf"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.143965 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.146771 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8hjrw"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.147778 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.149828 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcgg8"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.149853 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljbvf"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.149864 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hjrw"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.149875 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-cn9sm"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.151454 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.151806 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.154307 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.154503 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.206097 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-utilities\") pod \"community-operators-ljbvf\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.206752 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-catalog-content\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.206876 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-utilities\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.206964 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-catalog-content\") pod \"community-operators-ljbvf\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.212443 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82n6c\" (UniqueName: \"kubernetes.io/projected/b590b4bf-59f2-41c3-9284-1a05b5931ca8-kube-api-access-82n6c\") pod \"certified-operators-lcgg8\" (UID: 
\"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.212724 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.212889 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-catalog-content\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.212988 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4q6n\" (UniqueName: \"kubernetes.io/projected/fee47ac6-07af-412a-a292-3017390e3560-kube-api-access-d4q6n\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.213080 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsxbz\" (UniqueName: \"kubernetes.io/projected/4fca7730-4bcb-49ee-af38-a694d0f0438a-kube-api-access-wsxbz\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.213203 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-catalog-content\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.213278 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-utilities\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.213359 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-utilities\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.213481 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqk5c\" (UniqueName: \"kubernetes.io/projected/a920e838-b750-47a2-8241-bfd4d1d6f5b8-kube-api-access-vqk5c\") pod \"community-operators-ljbvf\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.213733 4998 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.71371014 +0000 UTC m=+139.361752830 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.218352 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn9sm"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.307200 4998 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-cz726 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.307327 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" podUID="b7afbd75-761b-4b21-832f-8aeba8f7802f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.315595 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.316109 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.816052535 +0000 UTC m=+139.464095225 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.318058 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-catalog-content\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.318288 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d4q6n\" (UniqueName: \"kubernetes.io/projected/fee47ac6-07af-412a-a292-3017390e3560-kube-api-access-d4q6n\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.318437 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wsxbz\" (UniqueName: \"kubernetes.io/projected/4fca7730-4bcb-49ee-af38-a694d0f0438a-kube-api-access-wsxbz\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.318570 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-catalog-content\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.318676 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-utilities\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.318820 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-utilities\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.318964 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vqk5c\" (UniqueName: \"kubernetes.io/projected/a920e838-b750-47a2-8241-bfd4d1d6f5b8-kube-api-access-vqk5c\") pod \"community-operators-ljbvf\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.338102 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-utilities\") pod \"community-operators-ljbvf\" (UID: 
\"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.338214 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-catalog-content\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.338284 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-utilities\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.338316 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-catalog-content\") pod \"community-operators-ljbvf\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.338346 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82n6c\" (UniqueName: \"kubernetes.io/projected/b590b4bf-59f2-41c3-9284-1a05b5931ca8-kube-api-access-82n6c\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.338514 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.320238 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-catalog-content\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.320467 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-utilities\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.320286 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-utilities\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.341432 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-utilities\") pod \"community-operators-ljbvf\" 
(UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.341759 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-catalog-content\") pod \"community-operators-ljbvf\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.320362 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-catalog-content\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.342481 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.842463708 +0000 UTC m=+139.490506398 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.342882 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-utilities\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.344803 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-catalog-content\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.390728 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:55 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:55 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:55 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.390840 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.417262 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.443847 4998 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.444291 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:55.944275088 +0000 UTC m=+139.592317768 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.446353 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsxbz\" (UniqueName: \"kubernetes.io/projected/4fca7730-4bcb-49ee-af38-a694d0f0438a-kube-api-access-wsxbz\") pod \"community-operators-cn9sm\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") " pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.447223 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqk5c\" (UniqueName: \"kubernetes.io/projected/a920e838-b750-47a2-8241-bfd4d1d6f5b8-kube-api-access-vqk5c\") pod \"community-operators-ljbvf\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.459781 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4q6n\" (UniqueName: \"kubernetes.io/projected/fee47ac6-07af-412a-a292-3017390e3560-kube-api-access-d4q6n\") pod \"certified-operators-8hjrw\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") " pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.481433 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82n6c\" (UniqueName: \"kubernetes.io/projected/b590b4bf-59f2-41c3-9284-1a05b5931ca8-kube-api-access-82n6c\") pod \"certified-operators-lcgg8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.546099 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.558242 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.558955 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.058940354 +0000 UTC m=+139.706983044 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.614801 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.680271 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hjrw" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.682121 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.682890 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.182864305 +0000 UTC m=+139.830906995 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.688048 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-cn9sm" Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.785333 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.785645 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.285632232 +0000 UTC m=+139.933674922 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:55 crc kubenswrapper[4998]: I1208 18:53:55.909780 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:55 crc kubenswrapper[4998]: E1208 18:53:55.910276 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.410257902 +0000 UTC m=+140.058300592 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.108935 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.109448 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.609429997 +0000 UTC m=+140.257472687 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.116520 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"12c965f4-cc05-4b58-8507-efad8b8726c9","Type":"ContainerStarted","Data":"6b4f4c63375929d7fb08d02bb1c81d34c434e3a224f25b0c331f60f19d79de0e"} Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.210624 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.211333 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.711314218 +0000 UTC m=+140.359356908 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.350294 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.350717 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.85070373 +0000 UTC m=+140.498746420 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.397395 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.397494 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.397742 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fhqvx"] Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.406091 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:56 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:56 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:56 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.406219 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.409049 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.453791 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.454234 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.454759 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-catalog-content\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.454892 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvnh\" (UniqueName: \"kubernetes.io/projected/3b36276e-af0d-4657-912a-df7c533bf822-kube-api-access-frvnh\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.454974 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:56.954945039 +0000 UTC m=+140.602987739 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.455159 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.455293 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-utilities\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.455735 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:56.955707211 +0000 UTC m=+140.603749901 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.528850 4998 patch_prober.go:28] interesting pod/openshift-config-operator-5777786469-cz726 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.528950 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" podUID="b7afbd75-761b-4b21-832f-8aeba8f7802f" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.12:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.654756 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.658128 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhqvx"] Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.664699 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.664878 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-catalog-content\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.664952 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-frvnh\" (UniqueName: \"kubernetes.io/projected/3b36276e-af0d-4657-912a-df7c533bf822-kube-api-access-frvnh\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.665008 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-utilities\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.665953 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.165936179 +0000 UTC m=+140.813978869 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.666936 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-catalog-content\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.667665 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-utilities\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.820653 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.821107 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.32109521 +0000 UTC m=+140.969137900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.844150 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zsnhs"] Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.892727 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-frvnh\" (UniqueName: \"kubernetes.io/projected/3b36276e-af0d-4657-912a-df7c533bf822-kube-api-access-frvnh\") pod \"redhat-marketplace-fhqvx\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:56 crc kubenswrapper[4998]: I1208 18:53:56.921545 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:56 crc kubenswrapper[4998]: E1208 18:53:56.921863 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.42184637 +0000 UTC m=+141.069889060 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.014084 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.027880 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.028552 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.528539429 +0000 UTC m=+141.176582119 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.069482 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.138146 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.138507 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55mcl\" (UniqueName: \"kubernetes.io/projected/59c1b82c-8ef2-4836-b72a-f603cfa44002-kube-api-access-55mcl\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.138571 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-utilities\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.138664 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-catalog-content\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.138860 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.638838321 +0000 UTC m=+141.286881011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.249983 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-catalog-content\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.250296 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-55mcl\" (UniqueName: \"kubernetes.io/projected/59c1b82c-8ef2-4836-b72a-f603cfa44002-kube-api-access-55mcl\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.250332 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-utilities\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.250386 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.250775 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.75076088 +0000 UTC m=+141.398803580 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.251255 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-catalog-content\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.251732 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-utilities\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.274777 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsnhs"] Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.324210 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.343525 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.352376 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.352898 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.852876059 +0000 UTC m=+141.500918749 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.353025 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.353077 4998 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.412267 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:57 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:57 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:57 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.412799 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.464177 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.464670 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:57.964642302 +0000 UTC m=+141.612684992 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.568226 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.568597 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.068580774 +0000 UTC m=+141.716623464 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.666437 4998 patch_prober.go:28] interesting pod/console-64d44f6ddf-6trs2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.666528 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-6trs2" podUID="a3223550-df04-4846-a030-56e1f6763d0b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.670553 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.675959 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-55mcl\" (UniqueName: \"kubernetes.io/projected/59c1b82c-8ef2-4836-b72a-f603cfa44002-kube-api-access-55mcl\") pod \"redhat-marketplace-zsnhs\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") " pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.680636 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:53:58.180596355 +0000 UTC m=+141.828639045 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.774163 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.774568 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.274532291 +0000 UTC m=+141.922574981 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.867705 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s4gnq"] Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.881881 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.882266 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.382250609 +0000 UTC m=+142.030293299 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.890154 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.901557 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsnhs" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.920060 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4gnq"] Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.965240 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 18:53:57 crc kubenswrapper[4998]: I1208 18:53:57.985379 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:57 crc kubenswrapper[4998]: E1208 18:53:57.988347 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.48830762 +0000 UTC m=+142.136350310 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.000616 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-utilities\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.000698 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4c9t\" (UniqueName: \"kubernetes.io/projected/3af11570-35c5-4991-ae53-bfd38cdea120-kube-api-access-f4c9t\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.000749 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-catalog-content\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.000916 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.001575 4998 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.501560968 +0000 UTC m=+142.149603658 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.104413 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.104796 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-utilities\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.104838 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f4c9t\" (UniqueName: \"kubernetes.io/projected/3af11570-35c5-4991-ae53-bfd38cdea120-kube-api-access-f4c9t\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.104868 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-catalog-content\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.105478 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-catalog-content\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.105570 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.60554962 +0000 UTC m=+142.253592310 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.105874 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-utilities\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.207021 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.207679 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.707659479 +0000 UTC m=+142.355702169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.258107 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lctrl"] Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.268001 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.310895 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.311150 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-utilities\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.311237 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brmc6\" (UniqueName: \"kubernetes.io/projected/86fd5359-56b1-4eb8-84ab-e4d39abc824d-kube-api-access-brmc6\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.311265 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-catalog-content\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.311372 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.811354923 +0000 UTC m=+142.459397613 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.340705 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f4c9t\" (UniqueName: \"kubernetes.io/projected/3af11570-35c5-4991-ae53-bfd38cdea120-kube-api-access-f4c9t\") pod \"redhat-operators-s4gnq\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.381649 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"12c965f4-cc05-4b58-8507-efad8b8726c9","Type":"ContainerStarted","Data":"c47ee19200e5b485f5b4641d995c74c72a83fef47a7ef4f529fba6d802e1694f"} Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.394311 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lctrl"] Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.402201 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:58 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:58 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:58 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.402656 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.414626 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.414910 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-utilities\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.415180 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-brmc6\" (UniqueName: \"kubernetes.io/projected/86fd5359-56b1-4eb8-84ab-e4d39abc824d-kube-api-access-brmc6\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.415247 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-catalog-content\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.416250 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-catalog-content\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.416821 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-utilities\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.417361 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:58.917340793 +0000 UTC m=+142.565383483 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.484807 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-brmc6\" (UniqueName: \"kubernetes.io/projected/86fd5359-56b1-4eb8-84ab-e4d39abc824d-kube-api-access-brmc6\") pod \"redhat-operators-lctrl\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.549494 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.551385 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.551919 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.051837584 +0000 UTC m=+142.699880284 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.594107 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.612193 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-cz726" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.653778 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.654161 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.154149059 +0000 UTC m=+142.802191749 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.730009 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.730100 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.753799 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=6.753782227 podStartE2EDuration="6.753782227s" podCreationTimestamp="2025-12-08 18:53:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:53:58.61276903 +0000 UTC m=+142.260811730" watchObservedRunningTime="2025-12-08 18:53:58.753782227 +0000 UTC m=+142.401824917" Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.754628 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.755473 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.255456344 +0000 UTC m=+142.903499034 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.827541 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8hjrw"] Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.871431 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.872083 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.372061857 +0000 UTC m=+143.020104547 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:58 crc kubenswrapper[4998]: I1208 18:53:58.975492 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:58 crc kubenswrapper[4998]: E1208 18:53:58.975829 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.475812512 +0000 UTC m=+143.123855202 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.077828 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.078209 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.578195719 +0000 UTC m=+143.226238409 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.178564 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.179145 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.679124114 +0000 UTC m=+143.327166804 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.280518 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.280858 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.780845692 +0000 UTC m=+143.428888382 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.389702 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.390334 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:53:59.89031099 +0000 UTC m=+143.538353680 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.424266 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-lcgg8"] Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.511867 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.512386 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.012361057 +0000 UTC m=+143.660403747 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.564170 4998 generic.go:358] "Generic (PLEG): container finished" podID="12c965f4-cc05-4b58-8507-efad8b8726c9" containerID="c47ee19200e5b485f5b4641d995c74c72a83fef47a7ef4f529fba6d802e1694f" exitCode=0 Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.564288 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"12c965f4-cc05-4b58-8507-efad8b8726c9","Type":"ContainerDied","Data":"c47ee19200e5b485f5b4641d995c74c72a83fef47a7ef4f529fba6d802e1694f"} Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.637239 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.639467 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.139448658 +0000 UTC m=+143.787491348 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.651523 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:53:59 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:53:59 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:53:59 crc kubenswrapper[4998]: healthz check failed Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.651635 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.665487 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hjrw" event={"ID":"fee47ac6-07af-412a-a292-3017390e3560","Type":"ContainerStarted","Data":"e28d25a1d5c44196ac39b27ab9a6784d17beb1d7cc9c89db4597d5cdc3778757"} Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.763841 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.764519 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.26450627 +0000 UTC m=+143.912548960 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.876411 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.876598 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2025-12-08 18:54:00.376554522 +0000 UTC m=+144.024597212 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.876922 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:53:59 crc kubenswrapper[4998]: E1208 18:53:59.877268 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.377253472 +0000 UTC m=+144.025296162 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:53:59 crc kubenswrapper[4998]: I1208 18:53:59.891903 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ljbvf"] Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:53:59.964821 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-cn9sm"] Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:53:59.986265 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:53:59.990588 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.490477717 +0000 UTC m=+144.138520407 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:53:59.994031 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.011832 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.511810175 +0000 UTC m=+144.159852865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.098131 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.099040 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.599014289 +0000 UTC m=+144.247056979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.201702 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.202080 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.702067005 +0000 UTC m=+144.350109685 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.237060 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhqvx"] Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.302245 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.302804 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.802782684 +0000 UTC m=+144.450825374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.394169 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:54:00 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:54:00 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:54:00 crc kubenswrapper[4998]: healthz check failed Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.394621 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.404088 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.406436 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.406810 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:00.906797077 +0000 UTC m=+144.554839767 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.407007 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-g9mjg" Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.408192 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.409935 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-chjpp" Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.417166 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.438797 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-5r5cb" Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.507960 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.508294 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.008240747 +0000 UTC m=+144.656283447 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.610730 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.611382 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.111361275 +0000 UTC m=+144.759403965 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.690379 4998 generic.go:358] "Generic (PLEG): container finished" podID="fee47ac6-07af-412a-a292-3017390e3560" containerID="66c1d82dc6b184a5caf09206bf3a5d2a79052387435caaa2351429aeda2c1ed8" exitCode=0 Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.690541 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hjrw" event={"ID":"fee47ac6-07af-412a-a292-3017390e3560","Type":"ContainerDied","Data":"66c1d82dc6b184a5caf09206bf3a5d2a79052387435caaa2351429aeda2c1ed8"} Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.709799 4998 generic.go:358] "Generic (PLEG): container finished" podID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerID="645eb174f7bd1e988f8b3117db952b91271fe6e8a0fbb44f9d8b9178417b9b01" exitCode=0 Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.709959 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcgg8" event={"ID":"b590b4bf-59f2-41c3-9284-1a05b5931ca8","Type":"ContainerDied","Data":"645eb174f7bd1e988f8b3117db952b91271fe6e8a0fbb44f9d8b9178417b9b01"} Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.710007 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcgg8" event={"ID":"b590b4bf-59f2-41c3-9284-1a05b5931ca8","Type":"ContainerStarted","Data":"e9f9bfd310b1d14086aa44fff039869ee6c5858a625d4abe13d7d547f5fc4e4d"} Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.713138 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.713627 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.213601767 +0000 UTC m=+144.861644457 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.806834 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljbvf" event={"ID":"a920e838-b750-47a2-8241-bfd4d1d6f5b8","Type":"ContainerStarted","Data":"a39da08c34ee47bffc4a70044a43d81c684279c8a8b5cabe6e8e831d011b6cd1"} Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.808660 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhqvx" event={"ID":"3b36276e-af0d-4657-912a-df7c533bf822","Type":"ContainerStarted","Data":"aef07da86fd892fd11f72dd62406bebd6e12e091d616b233c476f5f936c6bca8"} Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.814966 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.816255 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.31622957 +0000 UTC m=+144.964272260 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.817442 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn9sm" event={"ID":"4fca7730-4bcb-49ee-af38-a694d0f0438a","Type":"ContainerStarted","Data":"b3e3df987b26140b02c1c69425652a61f5b7fa8e1269e2c9c09b4a59c8976728"} Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.917547 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:00 crc kubenswrapper[4998]: E1208 18:54:00.917980 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.417962079 +0000 UTC m=+145.066004769 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:00 crc kubenswrapper[4998]: I1208 18:54:00.980787 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lctrl"] Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.034405 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.034868 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.534845808 +0000 UTC m=+145.182888498 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.098722 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsnhs"] Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.099045 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s4gnq"] Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.144486 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.144944 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.644918704 +0000 UTC m=+145.292961394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.247054 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.247427 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.747412724 +0000 UTC m=+145.395455414 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.252414 4998 ???:1] "http: TLS handshake error from 192.168.126.11:39616: no serving certificate available for the kubelet" Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.349238 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.349758 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.849732659 +0000 UTC m=+145.497775349 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.391884 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:54:01 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:54:01 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:54:01 crc kubenswrapper[4998]: healthz check failed Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.391958 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.455359 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.455947 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:01.955923414 +0000 UTC m=+145.603966094 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.516915 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.539465 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-l7dsk" Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.558091 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.559659 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.059618998 +0000 UTC m=+145.707661688 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.661359 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.663492 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.163469326 +0000 UTC m=+145.811512016 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.762680 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.763395 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.263377113 +0000 UTC m=+145.911419803 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.865613 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.893372 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.393345885 +0000 UTC m=+146.041388575 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.931369 4998 generic.go:358] "Generic (PLEG): container finished" podID="3b36276e-af0d-4657-912a-df7c533bf822" containerID="08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43" exitCode=0 Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.931593 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhqvx" event={"ID":"3b36276e-af0d-4657-912a-df7c533bf822","Type":"ContainerDied","Data":"08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43"} Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.964603 4998 generic.go:358] "Generic (PLEG): container finished" podID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerID="0ae060c227b04ff7fab9cad6fcdaff9cdda6bfe9336345b01293e3650f1e35df" exitCode=0 Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.964766 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn9sm" event={"ID":"4fca7730-4bcb-49ee-af38-a694d0f0438a","Type":"ContainerDied","Data":"0ae060c227b04ff7fab9cad6fcdaff9cdda6bfe9336345b01293e3650f1e35df"} Dec 08 18:54:01 crc kubenswrapper[4998]: I1208 18:54:01.976432 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:01 crc kubenswrapper[4998]: E1208 18:54:01.983824 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.483793021 +0000 UTC m=+146.131835712 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.001974 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.003022 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. 
No retries permitted until 2025-12-08 18:54:02.502997389 +0000 UTC m=+146.151040079 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.041800 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"12c965f4-cc05-4b58-8507-efad8b8726c9","Type":"ContainerDied","Data":"6b4f4c63375929d7fb08d02bb1c81d34c434e3a224f25b0c331f60f19d79de0e"} Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.041874 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6b4f4c63375929d7fb08d02bb1c81d34c434e3a224f25b0c331f60f19d79de0e" Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.043732 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.045874 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lctrl" event={"ID":"86fd5359-56b1-4eb8-84ab-e4d39abc824d","Type":"ContainerStarted","Data":"3568123769f56a1ce6d5b9841615c0f162f65baba4b39bc9c52b106be80cf77e"} Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.083501 4998 generic.go:358] "Generic (PLEG): container finished" podID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerID="86d69b6e7cadf605e4003a9089cc1d12256107455e9a0d64c32f043738f844be" exitCode=0 Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.083633 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljbvf" event={"ID":"a920e838-b750-47a2-8241-bfd4d1d6f5b8","Type":"ContainerDied","Data":"86d69b6e7cadf605e4003a9089cc1d12256107455e9a0d64c32f043738f844be"} Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.105481 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.105532 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c965f4-cc05-4b58-8507-efad8b8726c9-kube-api-access\") pod \"12c965f4-cc05-4b58-8507-efad8b8726c9\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.105642 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12c965f4-cc05-4b58-8507-efad8b8726c9-kubelet-dir\") pod \"12c965f4-cc05-4b58-8507-efad8b8726c9\" (UID: \"12c965f4-cc05-4b58-8507-efad8b8726c9\") " Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.106365 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/12c965f4-cc05-4b58-8507-efad8b8726c9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") 
pod "12c965f4-cc05-4b58-8507-efad8b8726c9" (UID: "12c965f4-cc05-4b58-8507-efad8b8726c9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.106451 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.606431896 +0000 UTC m=+146.254474586 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.152282 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12c965f4-cc05-4b58-8507-efad8b8726c9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "12c965f4-cc05-4b58-8507-efad8b8726c9" (UID: "12c965f4-cc05-4b58-8507-efad8b8726c9"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.156065 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsnhs" event={"ID":"59c1b82c-8ef2-4836-b72a-f603cfa44002","Type":"ContainerStarted","Data":"051b11639e4125a479ead239ed52f9a4efd47baf32634901defb6335ac0fc9f9"} Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.165891 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4gnq" event={"ID":"3af11570-35c5-4991-ae53-bfd38cdea120","Type":"ContainerStarted","Data":"d4e2d5976de7e41df5083cc5404b6feab2175c3819c083fa731da04c97b5ea5e"} Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.208863 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.209044 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/12c965f4-cc05-4b58-8507-efad8b8726c9-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.209061 4998 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/12c965f4-cc05-4b58-8507-efad8b8726c9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.210033 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.710016886 +0000 UTC m=+146.358059576 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.317398 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.318068 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.818051344 +0000 UTC m=+146.466094024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.411358 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:54:02 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:54:02 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:54:02 crc kubenswrapper[4998]: healthz check failed Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.411430 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.419053 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.419445 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:02.919433222 +0000 UTC m=+146.567475912 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.520872 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.521482 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.021429747 +0000 UTC m=+146.669472437 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.536451 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-sdn44" Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.622574 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.623588 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.123576347 +0000 UTC m=+146.771619037 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.747840 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.748321 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.2483034 +0000 UTC m=+146.896346090 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.850819 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.852083 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.352048046 +0000 UTC m=+147.000090726 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:02 crc kubenswrapper[4998]: I1208 18:54:02.964210 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:02 crc kubenswrapper[4998]: E1208 18:54:02.964658 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.464626723 +0000 UTC m=+147.112669413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.075969 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.076436 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.576421717 +0000 UTC m=+147.224464407 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.178400 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.178702 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.678673041 +0000 UTC m=+147.326715721 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.222548 4998 generic.go:358] "Generic (PLEG): container finished" podID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerID="ed0b6c1c72889738cf612798601709456d67aa8b8e2d50693ba60473a6015bc5" exitCode=0 Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.222744 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lctrl" event={"ID":"86fd5359-56b1-4eb8-84ab-e4d39abc824d","Type":"ContainerDied","Data":"ed0b6c1c72889738cf612798601709456d67aa8b8e2d50693ba60473a6015bc5"} Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.254757 4998 generic.go:358] "Generic (PLEG): container finished" podID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerID="ea2e97e387ed5c73b55207fa98f7436ccb693d2aeae852e9bd74b915582caca9" exitCode=0 Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.254906 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsnhs" event={"ID":"59c1b82c-8ef2-4836-b72a-f603cfa44002","Type":"ContainerDied","Data":"ea2e97e387ed5c73b55207fa98f7436ccb693d2aeae852e9bd74b915582caca9"} Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.257065 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.257698 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="12c965f4-cc05-4b58-8507-efad8b8726c9" containerName="pruner" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.257808 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="12c965f4-cc05-4b58-8507-efad8b8726c9" containerName="pruner" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.257908 4998 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="12c965f4-cc05-4b58-8507-efad8b8726c9" containerName="pruner" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.268943 4998 generic.go:358] "Generic (PLEG): container finished" podID="3af11570-35c5-4991-ae53-bfd38cdea120" containerID="f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30" exitCode=0 Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.274601 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4gnq" event={"ID":"3af11570-35c5-4991-ae53-bfd38cdea120","Type":"ContainerDied","Data":"f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30"} Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.274860 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.284228 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.284493 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.284496 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.285243 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.785228746 +0000 UTC m=+147.433271436 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.309008 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.309314 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2h985" event={"ID":"7890294d-7049-4cbf-97fa-9903320b19b2","Type":"ContainerStarted","Data":"73415181f86ecbcba552ebf61af8d304f7bd0cf9207e5e0c2d657b48999766fa"} Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.317313 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.386588 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.387029 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.387224 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.388077 4998 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-hnz2f container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Dec 08 18:54:03 crc kubenswrapper[4998]: [-]has-synced failed: reason withheld Dec 08 18:54:03 crc kubenswrapper[4998]: [+]process-running ok Dec 08 18:54:03 crc kubenswrapper[4998]: healthz check failed Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.388155 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" podUID="d6c8acb6-a7a7-4d46-9e9c-35018e8287ed" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.388307 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.888280871 +0000 UTC m=+147.536323561 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.488835 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.489253 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.489323 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.489657 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:03.9896409 +0000 UTC m=+147.637683600 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.489836 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.556413 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.590285 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.590767 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.090744859 +0000 UTC m=+147.738787550 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.609549 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.692086 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.692511 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.192494707 +0000 UTC m=+147.840537397 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.862968 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.863742 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.363725316 +0000 UTC m=+148.011768006 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:03 crc kubenswrapper[4998]: I1208 18:54:03.964798 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:03 crc kubenswrapper[4998]: E1208 18:54:03.965340 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.46532768 +0000 UTC m=+148.113370370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.068455 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.068658 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.568625133 +0000 UTC m=+148.216667823 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.069250 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.069595 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.56958662 +0000 UTC m=+148.217629310 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.273919 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.274572 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.774554369 +0000 UTC m=+148.422597059 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.307023 4998 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.376288 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.376781 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.87675905 +0000 UTC m=+148.524801740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.401614 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.407029 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-hnz2f" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.435640 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2h985" event={"ID":"7890294d-7049-4cbf-97fa-9903320b19b2","Type":"ContainerStarted","Data":"6a372fad8a1b3f0f93e5a1fda6f709e7301d297465ca3b260378ecb7ad297358"} Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.438355 4998 generic.go:358] "Generic (PLEG): container finished" podID="7d5886c2-9e1f-4792-a2c7-2194ea628db9" containerID="fa2d2898bda3f73754fe582be91c92574e4d5f7862a0acf2c660a0141463d075" exitCode=0 Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.438624 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" event={"ID":"7d5886c2-9e1f-4792-a2c7-2194ea628db9","Type":"ContainerDied","Data":"fa2d2898bda3f73754fe582be91c92574e4d5f7862a0acf2c660a0141463d075"} Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.479373 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.480706 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:04.9806658 +0000 UTC m=+148.628708490 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.595286 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.598897 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.098883069 +0000 UTC m=+148.746925759 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.701141 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.701177 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.201159572 +0000 UTC m=+148.849202262 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.701861 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.702193 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.202181522 +0000 UTC m=+148.850224202 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.809677 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.809916 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.809997 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.810118 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.810141 4998 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.817051 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.817300 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.817487 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.818173 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.318136664 +0000 UTC m=+148.966179354 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.822279 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.827338 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.830369 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.838918 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.852268 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.868202 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.896130 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.911882 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.911935 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:04 crc kubenswrapper[4998]: E1208 18:54:04.912285 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.412272116 +0000 UTC m=+149.060314806 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.916243 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 18:54:04 crc kubenswrapper[4998]: I1208 18:54:04.945347 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ab88c832-775d-46c6-9167-aa51d0574b17-metrics-certs\") pod \"network-metrics-daemon-z9wmf\" (UID: \"ab88c832-775d-46c6-9167-aa51d0574b17\") " pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.015950 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:05 crc kubenswrapper[4998]: E1208 18:54:05.016208 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.516188906 +0000 UTC m=+149.164231596 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.123781 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:05 crc kubenswrapper[4998]: E1208 18:54:05.124233 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.624216314 +0000 UTC m=+149.272259004 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-kj9vm" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.125530 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.136811 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-z9wmf" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.139572 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.153108 4998 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-12-08T18:54:04.307103516Z","UUID":"137f5416-ee86-42c6-bc61-abc65bcc4bc5","Handler":null,"Name":"","Endpoint":""} Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.167249 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.254399 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:05 crc kubenswrapper[4998]: E1208 18:54:05.254796 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2025-12-08 18:54:05.754779473 +0000 UTC m=+149.402822163 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Dec 08 18:54:05 crc kubenswrapper[4998]: W1208 18:54:05.279198 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod16856063_5d76_4d2c_a6aa_4fd3268b67b9.slice/crio-100b5a8ce6de126e6757efa490feac63e45c4a3e115b5bdbba49d71af4f142d2 WatchSource:0}: Error finding container 100b5a8ce6de126e6757efa490feac63e45c4a3e115b5bdbba49d71af4f142d2: Status 404 returned error can't find the container with id 100b5a8ce6de126e6757efa490feac63e45c4a3e115b5bdbba49d71af4f142d2 Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.279577 4998 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.279610 4998 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.372550 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.447845 4998 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.447898 4998 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.486281 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"16856063-5d76-4d2c-a6aa-4fd3268b67b9","Type":"ContainerStarted","Data":"100b5a8ce6de126e6757efa490feac63e45c4a3e115b5bdbba49d71af4f142d2"} Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.545712 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-kj9vm\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") " pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.576462 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.706154 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.723040 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 18:54:05 crc kubenswrapper[4998]: I1208 18:54:05.723308 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.387996 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.388803 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.388857 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.389416 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"1347156e2fb0d9e97b4d28669fab5aa67ada94156750eda4ead0ce88cc58744e"} pod="openshift-console/downloads-747b44746d-ln56w" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.389479 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" containerID="cri-o://1347156e2fb0d9e97b4d28669fab5aa67ada94156750eda4ead0ce88cc58744e" gracePeriod=2 Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.392504 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.392589 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:06 crc kubenswrapper[4998]: W1208 18:54:06.506367 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf863fff9_286a_45fa_b8f0_8a86994b8440.slice/crio-8b9080bb0b6d16ab701cda25d5cdecd483b4df88887099a3c1cf84265edaad2e WatchSource:0}: Error finding container 8b9080bb0b6d16ab701cda25d5cdecd483b4df88887099a3c1cf84265edaad2e: Status 404 returned error can't find the container with id 8b9080bb0b6d16ab701cda25d5cdecd483b4df88887099a3c1cf84265edaad2e Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.509403 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-2h985" event={"ID":"7890294d-7049-4cbf-97fa-9903320b19b2","Type":"ContainerStarted","Data":"44c3f8f5ee67e38d789b81b2925d74ee6ccfc702ba827084e41bbc1e54909cf0"} Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.512353 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.512525 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"06624971d6387b3f6cf9823744bef35b9d2f5c7651d46e6787c2d9a90726ebe7"} Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.533363 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"ce52114964428d112683b26292adfe0504dc9e56410b074c692ab2991252aefa"} Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.567321 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-2h985" podStartSLOduration=32.567301464 podStartE2EDuration="32.567301464s" podCreationTimestamp="2025-12-08 18:53:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:06.567101138 +0000 UTC m=+150.215143828" watchObservedRunningTime="2025-12-08 18:54:06.567301464 +0000 UTC m=+150.215344154" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.705207 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5886c2-9e1f-4792-a2c7-2194ea628db9-secret-volume\") pod \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.705377 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume\") pod \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.705400 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6d7s\" (UniqueName: \"kubernetes.io/projected/7d5886c2-9e1f-4792-a2c7-2194ea628db9-kube-api-access-j6d7s\") pod \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\" (UID: \"7d5886c2-9e1f-4792-a2c7-2194ea628db9\") " Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.709342 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume" (OuterVolumeSpecName: "config-volume") pod "7d5886c2-9e1f-4792-a2c7-2194ea628db9" (UID: "7d5886c2-9e1f-4792-a2c7-2194ea628db9"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.724789 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d5886c2-9e1f-4792-a2c7-2194ea628db9-kube-api-access-j6d7s" (OuterVolumeSpecName: "kube-api-access-j6d7s") pod "7d5886c2-9e1f-4792-a2c7-2194ea628db9" (UID: "7d5886c2-9e1f-4792-a2c7-2194ea628db9"). InnerVolumeSpecName "kube-api-access-j6d7s". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.726013 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7d5886c2-9e1f-4792-a2c7-2194ea628db9-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7d5886c2-9e1f-4792-a2c7-2194ea628db9" (UID: "7d5886c2-9e1f-4792-a2c7-2194ea628db9"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.755245 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-z9wmf"] Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.793373 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kj9vm"] Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.810503 4998 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7d5886c2-9e1f-4792-a2c7-2194ea628db9-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.810823 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j6d7s\" (UniqueName: \"kubernetes.io/projected/7d5886c2-9e1f-4792-a2c7-2194ea628db9-kube-api-access-j6d7s\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:06 crc kubenswrapper[4998]: I1208 18:54:06.810837 4998 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7d5886c2-9e1f-4792-a2c7-2194ea628db9-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:06 crc kubenswrapper[4998]: W1208 18:54:06.811047 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podab88c832_775d_46c6_9167_aa51d0574b17.slice/crio-0d1581dd13e2e1e4663cb0cef34e1f328da21dab096350ae0a7d9c89fc338bc3 WatchSource:0}: Error finding container 0d1581dd13e2e1e4663cb0cef34e1f328da21dab096350ae0a7d9c89fc338bc3: Status 404 returned error can't find the container with id 0d1581dd13e2e1e4663cb0cef34e1f328da21dab096350ae0a7d9c89fc338bc3 Dec 08 18:54:07 crc kubenswrapper[4998]: E1208 18:54:07.197443 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:07 crc kubenswrapper[4998]: E1208 18:54:07.199984 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:07 crc kubenswrapper[4998]: E1208 18:54:07.202582 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:07 crc kubenswrapper[4998]: E1208 18:54:07.202714 4998 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, 
stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.376468 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.555880 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"16856063-5d76-4d2c-a6aa-4fd3268b67b9","Type":"ContainerStarted","Data":"d5d9a6f97d7ef0c1915a8f3579125112d9bd464b0116bae2b7fd417813d63f15"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.576953 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" event={"ID":"e1550f97-e782-4bbe-b3a8-3df18c8f4041","Type":"ContainerStarted","Data":"3d7cced41589c4354afa44deef026a82a700bf099405573648db6887f4f37e31"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.577009 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" event={"ID":"e1550f97-e782-4bbe-b3a8-3df18c8f4041","Type":"ContainerStarted","Data":"2fc7c2894647f616852f6394c6f42ffe990e043633c58b21502e0d172492101e"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.578150 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.594126 4998 generic.go:358] "Generic (PLEG): container finished" podID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerID="1347156e2fb0d9e97b4d28669fab5aa67ada94156750eda4ead0ce88cc58744e" exitCode=0 Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.594280 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerDied","Data":"1347156e2fb0d9e97b4d28669fab5aa67ada94156750eda4ead0ce88cc58744e"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.594336 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerStarted","Data":"d89c7ac1b876088203fe57e3ecb61f9283414f96af11a7859f753f75a7c64672"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.595677 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.595775 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.595826 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.617000 4998 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"14223a9d95e54fef402e882974fa6605eb9d81477fe9ec5a15b5c13f5052780b"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.617058 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"8b9080bb0b6d16ab701cda25d5cdecd483b4df88887099a3c1cf84265edaad2e"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.639819 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"b1f357ea5598e7142338efd3cd08acbbb3479857855f7a503a871da10e3c40df"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.640329 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.648229 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"39b84379aeed5bcdccdaf7312d0f3e0125b0506653119aac8afbed4abf16d5cf"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.650563 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" event={"ID":"ab88c832-775d-46c6-9167-aa51d0574b17","Type":"ContainerStarted","Data":"0d1581dd13e2e1e4663cb0cef34e1f328da21dab096350ae0a7d9c89fc338bc3"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.669439 4998 patch_prober.go:28] interesting pod/console-64d44f6ddf-6trs2 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.669522 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-6trs2" podUID="a3223550-df04-4846-a030-56e1f6763d0b" containerName="console" probeResult="failure" output="Get \"https://10.217.0.14:8443/health\": dial tcp 10.217.0.14:8443: connect: connection refused" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.686078 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.686466 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420325-5mflt" event={"ID":"7d5886c2-9e1f-4792-a2c7-2194ea628db9","Type":"ContainerDied","Data":"982fcc9b12ec8a1dd981111f31fbdbaa36773a3506682d78e0fa2cf366ec3658"} Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.686487 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="982fcc9b12ec8a1dd981111f31fbdbaa36773a3506682d78e0fa2cf366ec3658" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.825952 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" podStartSLOduration=123.825932819 podStartE2EDuration="2m3.825932819s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:07.789998325 +0000 UTC m=+151.438041015" watchObservedRunningTime="2025-12-08 18:54:07.825932819 +0000 UTC m=+151.473975509" Dec 08 18:54:07 crc kubenswrapper[4998]: I1208 18:54:07.874540 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=4.874512772 podStartE2EDuration="4.874512772s" podCreationTimestamp="2025-12-08 18:54:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:07.827264467 +0000 UTC m=+151.475307167" watchObservedRunningTime="2025-12-08 18:54:07.874512772 +0000 UTC m=+151.522555462" Dec 08 18:54:08 crc kubenswrapper[4998]: I1208 18:54:08.706102 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" event={"ID":"ab88c832-775d-46c6-9167-aa51d0574b17","Type":"ContainerStarted","Data":"b7af9b06c87b8d59336eeee4fc4a13c2fd5071ad6842044251f4324b757a658e"} Dec 08 18:54:08 crc kubenswrapper[4998]: I1208 18:54:08.709210 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:08 crc kubenswrapper[4998]: I1208 18:54:08.709263 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:09 crc kubenswrapper[4998]: I1208 18:54:09.806827 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-z9wmf" event={"ID":"ab88c832-775d-46c6-9167-aa51d0574b17","Type":"ContainerStarted","Data":"fb5895893ab43577d216b3cefa94eb3a9688abf881f7fb77edfe7799dd85a23f"} Dec 08 18:54:13 crc kubenswrapper[4998]: I1208 18:54:13.969241 4998 generic.go:358] "Generic (PLEG): container finished" podID="16856063-5d76-4d2c-a6aa-4fd3268b67b9" containerID="d5d9a6f97d7ef0c1915a8f3579125112d9bd464b0116bae2b7fd417813d63f15" exitCode=0 Dec 08 18:54:13 crc kubenswrapper[4998]: I1208 18:54:13.969525 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"16856063-5d76-4d2c-a6aa-4fd3268b67b9","Type":"ContainerDied","Data":"d5d9a6f97d7ef0c1915a8f3579125112d9bd464b0116bae2b7fd417813d63f15"} Dec 08 18:54:14 crc kubenswrapper[4998]: I1208 18:54:14.000091 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-z9wmf" podStartSLOduration=130.000064953 podStartE2EDuration="2m10.000064953s" podCreationTimestamp="2025-12-08 18:52:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:09.909060491 +0000 UTC m=+153.557103201" watchObservedRunningTime="2025-12-08 18:54:14.000064953 +0000 UTC m=+157.648107653" Dec 08 18:54:16 crc kubenswrapper[4998]: I1208 18:54:16.377585 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:16 crc kubenswrapper[4998]: I1208 18:54:16.378485 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:17 crc kubenswrapper[4998]: E1208 18:54:17.196294 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:17 crc kubenswrapper[4998]: E1208 18:54:17.198615 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:17 crc kubenswrapper[4998]: E1208 18:54:17.200930 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:17 crc kubenswrapper[4998]: E1208 18:54:17.200992 4998 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:54:17 crc kubenswrapper[4998]: I1208 18:54:17.671224 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:54:17 crc kubenswrapper[4998]: I1208 18:54:17.676188 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-6trs2" Dec 08 18:54:18 crc kubenswrapper[4998]: I1208 18:54:18.727504 4998 patch_prober.go:28] interesting 
pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:18 crc kubenswrapper[4998]: I1208 18:54:18.727628 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:21 crc kubenswrapper[4998]: I1208 18:54:21.819805 4998 ???:1] "http: TLS handshake error from 192.168.126.11:40406: no serving certificate available for the kubelet" Dec 08 18:54:22 crc kubenswrapper[4998]: I1208 18:54:22.070431 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zmsq7_2e0a5cfb-edf8-471a-a968-dbc68e8639fb/kube-multus-additional-cni-plugins/0.log" Dec 08 18:54:22 crc kubenswrapper[4998]: I1208 18:54:22.070508 4998 generic.go:358] "Generic (PLEG): container finished" podID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" exitCode=137 Dec 08 18:54:22 crc kubenswrapper[4998]: I1208 18:54:22.070643 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" event={"ID":"2e0a5cfb-edf8-471a-a968-dbc68e8639fb","Type":"ContainerDied","Data":"d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c"} Dec 08 18:54:22 crc kubenswrapper[4998]: I1208 18:54:22.548128 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-665bn" Dec 08 18:54:26 crc kubenswrapper[4998]: I1208 18:54:26.378553 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:26 crc kubenswrapper[4998]: I1208 18:54:26.380095 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:27 crc kubenswrapper[4998]: E1208 18:54:27.194143 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:27 crc kubenswrapper[4998]: E1208 18:54:27.194595 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:27 crc kubenswrapper[4998]: E1208 18:54:27.195311 4998 log.go:32] "ExecSync cmd from runtime service failed" 
err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:27 crc kubenswrapper[4998]: E1208 18:54:27.195467 4998 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:54:28 crc kubenswrapper[4998]: I1208 18:54:28.709295 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:28 crc kubenswrapper[4998]: I1208 18:54:28.709587 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:29 crc kubenswrapper[4998]: I1208 18:54:29.815321 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 18:54:31 crc kubenswrapper[4998]: I1208 18:54:31.508633 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8w69c"] Dec 08 18:54:31 crc kubenswrapper[4998]: I1208 18:54:31.509092 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerName="controller-manager" containerID="cri-o://6ed471f92818eb082fb9e298575bb7a0b1d45d9b71291dcacdaa1c58924fd4b7" gracePeriod=30 Dec 08 18:54:31 crc kubenswrapper[4998]: I1208 18:54:31.537112 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"] Dec 08 18:54:31 crc kubenswrapper[4998]: I1208 18:54:31.537354 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerName="route-controller-manager" containerID="cri-o://01104f18717f4eac438036f0af17cd5eeeaf9b0d6b6190794d57738b30d5a02d" gracePeriod=30 Dec 08 18:54:32 crc kubenswrapper[4998]: I1208 18:54:32.417088 4998 generic.go:358] "Generic (PLEG): container finished" podID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerID="01104f18717f4eac438036f0af17cd5eeeaf9b0d6b6190794d57738b30d5a02d" exitCode=0 Dec 08 18:54:32 crc kubenswrapper[4998]: I1208 18:54:32.417947 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" event={"ID":"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f","Type":"ContainerDied","Data":"01104f18717f4eac438036f0af17cd5eeeaf9b0d6b6190794d57738b30d5a02d"} Dec 08 18:54:32 crc 
kubenswrapper[4998]: I1208 18:54:32.421905 4998 generic.go:358] "Generic (PLEG): container finished" podID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerID="6ed471f92818eb082fb9e298575bb7a0b1d45d9b71291dcacdaa1c58924fd4b7" exitCode=0 Dec 08 18:54:32 crc kubenswrapper[4998]: I1208 18:54:32.422030 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" event={"ID":"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7","Type":"ContainerDied","Data":"6ed471f92818eb082fb9e298575bb7a0b1d45d9b71291dcacdaa1c58924fd4b7"} Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.245074 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.246263 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7d5886c2-9e1f-4792-a2c7-2194ea628db9" containerName="collect-profiles" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.246288 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d5886c2-9e1f-4792-a2c7-2194ea628db9" containerName="collect-profiles" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.246460 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="7d5886c2-9e1f-4792-a2c7-2194ea628db9" containerName="collect-profiles" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.479049 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.480935 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.580619 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4449c7bb-a4da-495c-ace0-a06685eb0618-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.580937 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4449c7bb-a4da-495c-ace0-a06685eb0618-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.682197 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4449c7bb-a4da-495c-ace0-a06685eb0618-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.682308 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4449c7bb-a4da-495c-ace0-a06685eb0618-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.682510 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4449c7bb-a4da-495c-ace0-a06685eb0618-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:34 crc kubenswrapper[4998]: I1208 18:54:34.702628 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4449c7bb-a4da-495c-ace0-a06685eb0618-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:35 crc kubenswrapper[4998]: I1208 18:54:35.228227 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Dec 08 18:54:36 crc kubenswrapper[4998]: I1208 18:54:36.377522 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:36 crc kubenswrapper[4998]: I1208 18:54:36.377653 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:36 crc kubenswrapper[4998]: I1208 18:54:36.377726 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:54:36 crc kubenswrapper[4998]: I1208 18:54:36.378616 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"d89c7ac1b876088203fe57e3ecb61f9283414f96af11a7859f753f75a7c64672"} pod="openshift-console/downloads-747b44746d-ln56w" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 08 18:54:36 crc kubenswrapper[4998]: I1208 18:54:36.378634 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:36 crc kubenswrapper[4998]: I1208 18:54:36.378672 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" containerID="cri-o://d89c7ac1b876088203fe57e3ecb61f9283414f96af11a7859f753f75a7c64672" gracePeriod=2 Dec 08 18:54:36 crc kubenswrapper[4998]: I1208 18:54:36.378759 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:37 crc kubenswrapper[4998]: E1208 18:54:37.477589 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" 
cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:37 crc kubenswrapper[4998]: E1208 18:54:37.482379 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:37 crc kubenswrapper[4998]: E1208 18:54:37.485626 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:37 crc kubenswrapper[4998]: E1208 18:54:37.485678 4998 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:54:37 crc kubenswrapper[4998]: I1208 18:54:37.782624 4998 generic.go:358] "Generic (PLEG): container finished" podID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerID="d89c7ac1b876088203fe57e3ecb61f9283414f96af11a7859f753f75a7c64672" exitCode=0 Dec 08 18:54:37 crc kubenswrapper[4998]: I1208 18:54:37.782733 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerDied","Data":"d89c7ac1b876088203fe57e3ecb61f9283414f96af11a7859f753f75a7c64672"} Dec 08 18:54:37 crc kubenswrapper[4998]: I1208 18:54:37.782782 4998 scope.go:117] "RemoveContainer" containerID="1347156e2fb0d9e97b4d28669fab5aa67ada94156750eda4ead0ce88cc58744e" Dec 08 18:54:38 crc kubenswrapper[4998]: I1208 18:54:38.716185 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.221185 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.238815 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.244601 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.281784 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b17b1a4a-1d39-41b9-a555-7059787fe36d-kube-api-access\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.282497 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.283098 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-var-lock\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.383938 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-var-lock\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.383990 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b17b1a4a-1d39-41b9-a555-7059787fe36d-kube-api-access\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.384020 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.384097 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-kubelet-dir\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.384107 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-var-lock\") pod \"installer-12-crc\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.494307 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b17b1a4a-1d39-41b9-a555-7059787fe36d-kube-api-access\") pod \"installer-12-crc\" (UID: 
\"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:39 crc kubenswrapper[4998]: I1208 18:54:39.563767 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:54:40 crc kubenswrapper[4998]: I1208 18:54:40.399190 4998 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-8w69c container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Dec 08 18:54:40 crc kubenswrapper[4998]: I1208 18:54:40.399447 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.17:8443/healthz\": dial tcp 10.217.0.17:8443: connect: connection refused" Dec 08 18:54:41 crc kubenswrapper[4998]: I1208 18:54:41.670319 4998 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-rcm4l container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" start-of-body= Dec 08 18:54:41 crc kubenswrapper[4998]: I1208 18:54:41.670842 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.20:8443/healthz\": dial tcp 10.217.0.20:8443: connect: connection refused" Dec 08 18:54:46 crc kubenswrapper[4998]: I1208 18:54:46.379612 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:46 crc kubenswrapper[4998]: I1208 18:54:46.379726 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:47 crc kubenswrapper[4998]: E1208 18:54:47.193917 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:47 crc kubenswrapper[4998]: E1208 18:54:47.194482 4998 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:47 crc kubenswrapper[4998]: E1208 18:54:47.194709 4998 log.go:32] "ExecSync cmd from runtime 
service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" cmd=["/bin/bash","-c","test -f /ready/ready"] Dec 08 18:54:47 crc kubenswrapper[4998]: E1208 18:54:47.194738 4998 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c is running failed: container process not found" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.105939 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.112079 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.116381 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"16856063-5d76-4d2c-a6aa-4fd3268b67b9","Type":"ContainerDied","Data":"100b5a8ce6de126e6757efa490feac63e45c4a3e115b5bdbba49d71af4f142d2"} Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.116421 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="100b5a8ce6de126e6757efa490feac63e45c4a3e115b5bdbba49d71af4f142d2" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.116427 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.118971 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" event={"ID":"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f","Type":"ContainerDied","Data":"8c5101a50fb1a72786392a48ad513f9ff24742105e88c7896305daf4c0a06af6"} Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.119087 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.144135 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6"] Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.144920 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerName="route-controller-manager" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.144935 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerName="route-controller-manager" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.144954 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="16856063-5d76-4d2c-a6aa-4fd3268b67b9" containerName="pruner" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.144960 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="16856063-5d76-4d2c-a6aa-4fd3268b67b9" containerName="pruner" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.145063 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" containerName="route-controller-manager" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.145074 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="16856063-5d76-4d2c-a6aa-4fd3268b67b9" containerName="pruner" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.172152 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.173567 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6"] Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.210864 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-srmsp\" (UniqueName: \"kubernetes.io/projected/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-kube-api-access-srmsp\") pod \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.210953 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-serving-cert\") pod \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.210985 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kube-api-access\") pod \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211007 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-tmp\") pod \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211155 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-config\") pod \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211190 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-client-ca\") pod \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\" (UID: \"4c5c7559-6c14-4e57-ae81-0403c0eb3c6f\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211254 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kubelet-dir\") pod \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\" (UID: \"16856063-5d76-4d2c-a6aa-4fd3268b67b9\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211388 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lb5x\" (UniqueName: \"kubernetes.io/projected/fb4b9f38-d216-4744-ab57-18f45a0af4b9-kube-api-access-6lb5x\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211476 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb4b9f38-d216-4744-ab57-18f45a0af4b9-tmp\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211499 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-client-ca\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211541 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-config\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.211587 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4b9f38-d216-4744-ab57-18f45a0af4b9-serving-cert\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.212914 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-client-ca" (OuterVolumeSpecName: "client-ca") pod "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" (UID: "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.213214 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "16856063-5d76-4d2c-a6aa-4fd3268b67b9" (UID: "16856063-5d76-4d2c-a6aa-4fd3268b67b9"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.213298 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-tmp" (OuterVolumeSpecName: "tmp") pod "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" (UID: "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.213806 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-config" (OuterVolumeSpecName: "config") pod "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" (UID: "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.219457 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-kube-api-access-srmsp" (OuterVolumeSpecName: "kube-api-access-srmsp") pod "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" (UID: "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f"). InnerVolumeSpecName "kube-api-access-srmsp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.219717 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" (UID: "4c5c7559-6c14-4e57-ae81-0403c0eb3c6f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.230297 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "16856063-5d76-4d2c-a6aa-4fd3268b67b9" (UID: "16856063-5d76-4d2c-a6aa-4fd3268b67b9"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.313503 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4b9f38-d216-4744-ab57-18f45a0af4b9-serving-cert\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.313945 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6lb5x\" (UniqueName: \"kubernetes.io/projected/fb4b9f38-d216-4744-ab57-18f45a0af4b9-kube-api-access-6lb5x\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314029 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb4b9f38-d216-4744-ab57-18f45a0af4b9-tmp\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314045 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-client-ca\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314087 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-config\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314149 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314165 4998 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314173 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-srmsp\" (UniqueName: \"kubernetes.io/projected/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-kube-api-access-srmsp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314188 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314196 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/16856063-5d76-4d2c-a6aa-4fd3268b67b9-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 
18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314203 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.314216 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.317918 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb4b9f38-d216-4744-ab57-18f45a0af4b9-tmp\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.318666 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-config\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.319530 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zmsq7_2e0a5cfb-edf8-471a-a968-dbc68e8639fb/kube-multus-additional-cni-plugins/0.log" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.319609 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.319610 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-client-ca\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.323409 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4b9f38-d216-4744-ab57-18f45a0af4b9-serving-cert\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.338438 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lb5x\" (UniqueName: \"kubernetes.io/projected/fb4b9f38-d216-4744-ab57-18f45a0af4b9-kube-api-access-6lb5x\") pod \"route-controller-manager-7bb66c6589-pcbw6\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.414894 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-ready\") pod \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.414953 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-tuning-conf-dir\") pod \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.415018 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b9qfp\" (UniqueName: \"kubernetes.io/projected/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-kube-api-access-b9qfp\") pod \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.415072 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-cni-sysctl-allowlist\") pod \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\" (UID: \"2e0a5cfb-edf8-471a-a968-dbc68e8639fb\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.415107 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "2e0a5cfb-edf8-471a-a968-dbc68e8639fb" (UID: "2e0a5cfb-edf8-471a-a968-dbc68e8639fb"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.415318 4998 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.415931 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-ready" (OuterVolumeSpecName: "ready") pod "2e0a5cfb-edf8-471a-a968-dbc68e8639fb" (UID: "2e0a5cfb-edf8-471a-a968-dbc68e8639fb"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.415817 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "2e0a5cfb-edf8-471a-a968-dbc68e8639fb" (UID: "2e0a5cfb-edf8-471a-a968-dbc68e8639fb"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.429915 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-kube-api-access-b9qfp" (OuterVolumeSpecName: "kube-api-access-b9qfp") pod "2e0a5cfb-edf8-471a-a968-dbc68e8639fb" (UID: "2e0a5cfb-edf8-471a-a968-dbc68e8639fb"). InnerVolumeSpecName "kube-api-access-b9qfp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.447119 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"] Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.450658 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-rcm4l"] Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.497544 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.517432 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b9qfp\" (UniqueName: \"kubernetes.io/projected/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-kube-api-access-b9qfp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.517460 4998 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.517471 4998 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/2e0a5cfb-edf8-471a-a968-dbc68e8639fb-ready\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.612912 4998 scope.go:117] "RemoveContainer" containerID="01104f18717f4eac438036f0af17cd5eeeaf9b0d6b6190794d57738b30d5a02d" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.624317 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.657328 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-858f858449-fc8tl"] Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.657954 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerName="controller-manager" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.657969 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerName="controller-manager" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.657987 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.657993 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.658095 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" containerName="controller-manager" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.658108 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" containerName="kube-multus-additional-cni-plugins" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.719799 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-serving-cert\") pod \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.719859 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-config\") pod \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.719951 4998 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-tmp\") pod \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.720097 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f2fw5\" (UniqueName: \"kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5\") pod \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.720190 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-client-ca\") pod \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.720346 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-proxy-ca-bundles\") pod \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\" (UID: \"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7\") " Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.721708 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" (UID: "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.722233 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-tmp" (OuterVolumeSpecName: "tmp") pod "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" (UID: "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.722468 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-config" (OuterVolumeSpecName: "config") pod "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" (UID: "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.722822 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-client-ca" (OuterVolumeSpecName: "client-ca") pod "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" (UID: "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.725548 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" (UID: "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.726024 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5" (OuterVolumeSpecName: "kube-api-access-f2fw5") pod "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" (UID: "f8f4cca3-5c94-40ee-9566-f0d0bf09adc7"). InnerVolumeSpecName "kube-api-access-f2fw5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.793296 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-858f858449-fc8tl"] Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.793458 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.823138 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976eae66-182e-4714-adca-e8276f39ff21-serving-cert\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.823413 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-client-ca\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.823464 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-config\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.823500 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4rht\" (UniqueName: \"kubernetes.io/projected/976eae66-182e-4714-adca-e8276f39ff21-kube-api-access-k4rht\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.823789 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-proxy-ca-bundles\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.823885 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976eae66-182e-4714-adca-e8276f39ff21-tmp\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc 
kubenswrapper[4998]: I1208 18:54:49.824003 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.824027 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f2fw5\" (UniqueName: \"kubernetes.io/projected/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-kube-api-access-f2fw5\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.824042 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.824055 4998 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.824066 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.824078 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.924750 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-config\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.926135 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k4rht\" (UniqueName: \"kubernetes.io/projected/976eae66-182e-4714-adca-e8276f39ff21-kube-api-access-k4rht\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.926067 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-config\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.926618 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-proxy-ca-bundles\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.927575 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-proxy-ca-bundles\") pod \"controller-manager-858f858449-fc8tl\" (UID: 
\"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.927643 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976eae66-182e-4714-adca-e8276f39ff21-tmp\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.927967 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976eae66-182e-4714-adca-e8276f39ff21-tmp\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.928036 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976eae66-182e-4714-adca-e8276f39ff21-serving-cert\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.928600 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-client-ca\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.929267 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-client-ca\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.932522 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976eae66-182e-4714-adca-e8276f39ff21-serving-cert\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:49 crc kubenswrapper[4998]: I1208 18:54:49.941081 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k4rht\" (UniqueName: \"kubernetes.io/projected/976eae66-182e-4714-adca-e8276f39ff21-kube-api-access-k4rht\") pod \"controller-manager-858f858449-fc8tl\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") " pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.107224 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.126873 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-zmsq7_2e0a5cfb-edf8-471a-a968-dbc68e8639fb/kube-multus-additional-cni-plugins/0.log" Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.127010 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.128120 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-zmsq7" event={"ID":"2e0a5cfb-edf8-471a-a968-dbc68e8639fb","Type":"ContainerDied","Data":"7e23435211092787427d76b34dad2d82ca7cf406d8d0a00f4b39ac487ccc75c6"} Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.130345 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" event={"ID":"f8f4cca3-5c94-40ee-9566-f0d0bf09adc7","Type":"ContainerDied","Data":"109ed352cc7128d8d9e38e8fc24404f2dd1c81c6cfd6ea4b1be0d3b798c97955"} Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.130448 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-8w69c" Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.157987 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zmsq7"] Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.164524 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-zmsq7"] Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.181222 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8w69c"] Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.181269 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-8w69c"] Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.417731 4998 scope.go:117] "RemoveContainer" containerID="d3133e2847b7715eefd1c9f53fd9e1e0b71cdb068d2bc341250a906419bd6a9c" Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.574531 4998 scope.go:117] "RemoveContainer" containerID="6ed471f92818eb082fb9e298575bb7a0b1d45d9b71291dcacdaa1c58924fd4b7" Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.851999 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Dec 08 18:54:50 crc kubenswrapper[4998]: W1208 18:54:50.868104 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4449c7bb_a4da_495c_ace0_a06685eb0618.slice/crio-bad03abc0bdad1d14225f9a6a93233ce92ca2e892f9c477230a3eb8ff0c81e43 WatchSource:0}: Error finding container bad03abc0bdad1d14225f9a6a93233ce92ca2e892f9c477230a3eb8ff0c81e43: Status 404 returned error can't find the container with id bad03abc0bdad1d14225f9a6a93233ce92ca2e892f9c477230a3eb8ff0c81e43 Dec 08 18:54:50 crc kubenswrapper[4998]: I1208 18:54:50.976906 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.115408 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-858f858449-fc8tl"] Dec 08 18:54:51 
crc kubenswrapper[4998]: W1208 18:54:51.125567 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod976eae66_182e_4714_adca_e8276f39ff21.slice/crio-6167d92d3bdf04deb75949f4a164288144f6874dd570ae73d1f1080ca904dcee WatchSource:0}: Error finding container 6167d92d3bdf04deb75949f4a164288144f6874dd570ae73d1f1080ca904dcee: Status 404 returned error can't find the container with id 6167d92d3bdf04deb75949f4a164288144f6874dd570ae73d1f1080ca904dcee Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.138444 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4449c7bb-a4da-495c-ace0-a06685eb0618","Type":"ContainerStarted","Data":"bad03abc0bdad1d14225f9a6a93233ce92ca2e892f9c477230a3eb8ff0c81e43"} Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.139322 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"b17b1a4a-1d39-41b9-a555-7059787fe36d","Type":"ContainerStarted","Data":"1bc76512d76359648ce265f24808478cb2d13a3d55c519c3708ee15c2d8d0a77"} Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.140128 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" event={"ID":"976eae66-182e-4714-adca-e8276f39ff21","Type":"ContainerStarted","Data":"6167d92d3bdf04deb75949f4a164288144f6874dd570ae73d1f1080ca904dcee"} Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.166720 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6"] Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.257818 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6"] Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.260662 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-858f858449-fc8tl"] Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.373395 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e0a5cfb-edf8-471a-a968-dbc68e8639fb" path="/var/lib/kubelet/pods/2e0a5cfb-edf8-471a-a968-dbc68e8639fb/volumes" Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.374202 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c5c7559-6c14-4e57-ae81-0403c0eb3c6f" path="/var/lib/kubelet/pods/4c5c7559-6c14-4e57-ae81-0403c0eb3c6f/volumes" Dec 08 18:54:51 crc kubenswrapper[4998]: I1208 18:54:51.374845 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f8f4cca3-5c94-40ee-9566-f0d0bf09adc7" path="/var/lib/kubelet/pods/f8f4cca3-5c94-40ee-9566-f0d0bf09adc7/volumes" Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.229147 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn9sm" event={"ID":"4fca7730-4bcb-49ee-af38-a694d0f0438a","Type":"ContainerStarted","Data":"ff79729f9ceb39f269152e997c8a52cf739d23c0a636847040932880ccaa0ecd"} Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.238397 4998 generic.go:358] "Generic (PLEG): container finished" podID="fee47ac6-07af-412a-a292-3017390e3560" containerID="e5673c888efec4003d377a7e50aedb0189aff98512dec9a2bd4c8692affb2895" exitCode=0 Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.238545 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-8hjrw" event={"ID":"fee47ac6-07af-412a-a292-3017390e3560","Type":"ContainerDied","Data":"e5673c888efec4003d377a7e50aedb0189aff98512dec9a2bd4c8692affb2895"} Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.281122 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerStarted","Data":"6012e8c7ffe5c43e3b6ccbd8a83d7496841e085328a911c092a8b8b81d7c9714"} Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.282547 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.282586 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.282625 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.319108 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" event={"ID":"fb4b9f38-d216-4744-ab57-18f45a0af4b9","Type":"ContainerStarted","Data":"ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e"} Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.319164 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" event={"ID":"fb4b9f38-d216-4744-ab57-18f45a0af4b9","Type":"ContainerStarted","Data":"1ed4b9c934a7d2cd9c178e34bfb3bb2fbe0d798158544d000cb460426e8c498b"} Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.319229 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" podUID="fb4b9f38-d216-4744-ab57-18f45a0af4b9" containerName="route-controller-manager" containerID="cri-o://ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e" gracePeriod=30 Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.319616 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.435507 4998 generic.go:358] "Generic (PLEG): container finished" podID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerID="f130dc470bf3b2dfcbb40d98cc0e069e8d10d14fa8f0a1ed96717044a0bba101" exitCode=0 Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.436479 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsnhs" event={"ID":"59c1b82c-8ef2-4836-b72a-f603cfa44002","Type":"ContainerDied","Data":"f130dc470bf3b2dfcbb40d98cc0e069e8d10d14fa8f0a1ed96717044a0bba101"} Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.504093 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" 
podStartSLOduration=21.504070918 podStartE2EDuration="21.504070918s" podCreationTimestamp="2025-12-08 18:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:52.429489113 +0000 UTC m=+196.077531803" watchObservedRunningTime="2025-12-08 18:54:52.504070918 +0000 UTC m=+196.152113608" Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.700189 4998 patch_prober.go:28] interesting pod/route-controller-manager-7bb66c6589-pcbw6 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:57486->10.217.0.56:8443: read: connection reset by peer" start-of-body= Dec 08 18:54:52 crc kubenswrapper[4998]: I1208 18:54:52.700278 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" podUID="fb4b9f38-d216-4744-ab57-18f45a0af4b9" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.56:8443/healthz\": read tcp 10.217.0.2:57486->10.217.0.56:8443: read: connection reset by peer" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.149678 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ncn97"] Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.190514 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7bb66c6589-pcbw6_fb4b9f38-d216-4744-ab57-18f45a0af4b9/route-controller-manager/0.log" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.190617 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.378126 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"] Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.384328 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fb4b9f38-d216-4744-ab57-18f45a0af4b9" containerName="route-controller-manager" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.384372 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fb4b9f38-d216-4744-ab57-18f45a0af4b9" containerName="route-controller-manager" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.384516 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fb4b9f38-d216-4744-ab57-18f45a0af4b9" containerName="route-controller-manager" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.396673 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.430064 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4b9f38-d216-4744-ab57-18f45a0af4b9-serving-cert\") pod \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.430131 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-client-ca\") pod \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.430191 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6lb5x\" (UniqueName: \"kubernetes.io/projected/fb4b9f38-d216-4744-ab57-18f45a0af4b9-kube-api-access-6lb5x\") pod \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.430241 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb4b9f38-d216-4744-ab57-18f45a0af4b9-tmp\") pod \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.430329 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-config\") pod \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\" (UID: \"fb4b9f38-d216-4744-ab57-18f45a0af4b9\") " Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.432121 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-config\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.435273 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-client-ca" (OuterVolumeSpecName: "client-ca") pod "fb4b9f38-d216-4744-ab57-18f45a0af4b9" (UID: "fb4b9f38-d216-4744-ab57-18f45a0af4b9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.437978 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fb4b9f38-d216-4744-ab57-18f45a0af4b9-tmp" (OuterVolumeSpecName: "tmp") pod "fb4b9f38-d216-4744-ab57-18f45a0af4b9" (UID: "fb4b9f38-d216-4744-ab57-18f45a0af4b9"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.438388 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-config" (OuterVolumeSpecName: "config") pod "fb4b9f38-d216-4744-ab57-18f45a0af4b9" (UID: "fb4b9f38-d216-4744-ab57-18f45a0af4b9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.462126 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fb4b9f38-d216-4744-ab57-18f45a0af4b9-kube-api-access-6lb5x" (OuterVolumeSpecName: "kube-api-access-6lb5x") pod "fb4b9f38-d216-4744-ab57-18f45a0af4b9" (UID: "fb4b9f38-d216-4744-ab57-18f45a0af4b9"). InnerVolumeSpecName "kube-api-access-6lb5x". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.465821 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fb4b9f38-d216-4744-ab57-18f45a0af4b9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "fb4b9f38-d216-4744-ab57-18f45a0af4b9" (UID: "fb4b9f38-d216-4744-ab57-18f45a0af4b9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.524203 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"] Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539098 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgg6q\" (UniqueName: \"kubernetes.io/projected/99b88ce0-fe99-4655-bc32-60693b30f558-kube-api-access-lgg6q\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539139 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-client-ca\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539182 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99b88ce0-fe99-4655-bc32-60693b30f558-tmp\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539219 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-config\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539263 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b88ce0-fe99-4655-bc32-60693b30f558-serving-cert\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539371 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6lb5x\" (UniqueName: 
\"kubernetes.io/projected/fb4b9f38-d216-4744-ab57-18f45a0af4b9-kube-api-access-6lb5x\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539383 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/fb4b9f38-d216-4744-ab57-18f45a0af4b9-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539392 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539399 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fb4b9f38-d216-4744-ab57-18f45a0af4b9-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.539408 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/fb4b9f38-d216-4744-ab57-18f45a0af4b9-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.546320 4998 generic.go:358] "Generic (PLEG): container finished" podID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerID="60e38b463df93959398594b794a89e0592b09533ea3ef48a3f9c4460544c77df" exitCode=0 Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.547439 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljbvf" event={"ID":"a920e838-b750-47a2-8241-bfd4d1d6f5b8","Type":"ContainerDied","Data":"60e38b463df93959398594b794a89e0592b09533ea3ef48a3f9c4460544c77df"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.549790 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-config\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.565158 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-7bb66c6589-pcbw6_fb4b9f38-d216-4744-ab57-18f45a0af4b9/route-controller-manager/0.log" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.565393 4998 generic.go:358] "Generic (PLEG): container finished" podID="fb4b9f38-d216-4744-ab57-18f45a0af4b9" containerID="ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e" exitCode=255 Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.565572 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" event={"ID":"fb4b9f38-d216-4744-ab57-18f45a0af4b9","Type":"ContainerDied","Data":"ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.565710 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" event={"ID":"fb4b9f38-d216-4744-ab57-18f45a0af4b9","Type":"ContainerDied","Data":"1ed4b9c934a7d2cd9c178e34bfb3bb2fbe0d798158544d000cb460426e8c498b"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.565785 4998 scope.go:117] "RemoveContainer" containerID="ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e" Dec 08 
18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.566009 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.580998 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsnhs" event={"ID":"59c1b82c-8ef2-4836-b72a-f603cfa44002","Type":"ContainerStarted","Data":"cb2ac25e665e0985bdb93f2f399c987e6cb30701b503581452297a9e3de28565"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.638273 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4gnq" event={"ID":"3af11570-35c5-4991-ae53-bfd38cdea120","Type":"ContainerStarted","Data":"b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.643810 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lgg6q\" (UniqueName: \"kubernetes.io/projected/99b88ce0-fe99-4655-bc32-60693b30f558-kube-api-access-lgg6q\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.643861 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-client-ca\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.643908 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99b88ce0-fe99-4655-bc32-60693b30f558-tmp\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.644000 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b88ce0-fe99-4655-bc32-60693b30f558-serving-cert\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.644488 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99b88ce0-fe99-4655-bc32-60693b30f558-tmp\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.645089 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-client-ca\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.647884 4998 generic.go:358] "Generic (PLEG): container finished" 
podID="3b36276e-af0d-4657-912a-df7c533bf822" containerID="087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22" exitCode=0 Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.647961 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhqvx" event={"ID":"3b36276e-af0d-4657-912a-df7c533bf822","Type":"ContainerDied","Data":"087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.649727 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b88ce0-fe99-4655-bc32-60693b30f558-serving-cert\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.651780 4998 generic.go:358] "Generic (PLEG): container finished" podID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerID="ff79729f9ceb39f269152e997c8a52cf739d23c0a636847040932880ccaa0ecd" exitCode=0 Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.651875 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn9sm" event={"ID":"4fca7730-4bcb-49ee-af38-a694d0f0438a","Type":"ContainerDied","Data":"ff79729f9ceb39f269152e997c8a52cf739d23c0a636847040932880ccaa0ecd"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.656922 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lctrl" event={"ID":"86fd5359-56b1-4eb8-84ab-e4d39abc824d","Type":"ContainerStarted","Data":"a9339bc74712657e17aa932f9d21c583c6068a279b1ed67c46f4e13416dc1b25"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.658782 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4449c7bb-a4da-495c-ace0-a06685eb0618","Type":"ContainerStarted","Data":"086d9f91ad380ecf5a29c7d950e94a5b5a4586c2505bfc6fa203cb2347ec647a"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.670605 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"b17b1a4a-1d39-41b9-a555-7059787fe36d","Type":"ContainerStarted","Data":"7bbc83ba5747ddf995c017daa22d564aacd547a540d18d8fa59ab07cbec58109"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.685204 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgg6q\" (UniqueName: \"kubernetes.io/projected/99b88ce0-fe99-4655-bc32-60693b30f558-kube-api-access-lgg6q\") pod \"route-controller-manager-7dd4cddf49-t6ftm\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.688558 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hjrw" event={"ID":"fee47ac6-07af-412a-a292-3017390e3560","Type":"ContainerStarted","Data":"bde62bcf840cb54402b2101dfd0bd05d0398cff24cfd1e9682b1841cfd7ce456"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.700908 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zsnhs" podStartSLOduration=11.641041364 podStartE2EDuration="57.700891408s" podCreationTimestamp="2025-12-08 18:53:56 +0000 UTC" firstStartedPulling="2025-12-08 18:54:03.255760987 
+0000 UTC m=+146.903803677" lastFinishedPulling="2025-12-08 18:54:49.315611031 +0000 UTC m=+192.963653721" observedRunningTime="2025-12-08 18:54:53.666650968 +0000 UTC m=+197.314693668" watchObservedRunningTime="2025-12-08 18:54:53.700891408 +0000 UTC m=+197.348934098" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.702111 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" event={"ID":"976eae66-182e-4714-adca-e8276f39ff21","Type":"ContainerStarted","Data":"80d5b1e0989a6f14b96bdf8f9ae9ddb66b73dc41a8c6a1a4dcbe98d31babaa4d"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.702395 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" podUID="976eae66-182e-4714-adca-e8276f39ff21" containerName="controller-manager" containerID="cri-o://80d5b1e0989a6f14b96bdf8f9ae9ddb66b73dc41a8c6a1a4dcbe98d31babaa4d" gracePeriod=30 Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.703917 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.706261 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6"] Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.707996 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bb66c6589-pcbw6"] Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.718935 4998 generic.go:358] "Generic (PLEG): container finished" podID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerID="1b7579a9dc13ba7bc6b599a04153703ae4fdfc30f74d92174b2c9f250e9a8fe0" exitCode=0 Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.719817 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcgg8" event={"ID":"b590b4bf-59f2-41c3-9284-1a05b5931ca8","Type":"ContainerDied","Data":"1b7579a9dc13ba7bc6b599a04153703ae4fdfc30f74d92174b2c9f250e9a8fe0"} Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.723781 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.723820 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.730276 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=14.730257418 podStartE2EDuration="14.730257418s" podCreationTimestamp="2025-12-08 18:54:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:53.728333716 +0000 UTC m=+197.376376406" watchObservedRunningTime="2025-12-08 18:54:53.730257418 +0000 UTC m=+197.378300108" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.735243 4998 patch_prober.go:28] 
interesting pod/controller-manager-858f858449-fc8tl container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:47298->10.217.0.57:8443: read: connection reset by peer" start-of-body= Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.735323 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" podUID="976eae66-182e-4714-adca-e8276f39ff21" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.57:8443/healthz\": read tcp 10.217.0.2:47298->10.217.0.57:8443: read: connection reset by peer" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.747422 4998 scope.go:117] "RemoveContainer" containerID="ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e" Dec 08 18:54:53 crc kubenswrapper[4998]: E1208 18:54:53.747864 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e\": container with ID starting with ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e not found: ID does not exist" containerID="ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.747893 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e"} err="failed to get container status \"ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e\": rpc error: code = NotFound desc = could not find container \"ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e\": container with ID starting with ddb07f44f5974c7311e0ba19837cdea7178169463343c101cedd71f57f84070e not found: ID does not exist" Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.754999 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"
Dec 08 18:54:53 crc kubenswrapper[4998]: I1208 18:54:53.835321 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8hjrw" podStartSLOduration=10.110468387 podStartE2EDuration="59.835306412s" podCreationTimestamp="2025-12-08 18:53:54 +0000 UTC" firstStartedPulling="2025-12-08 18:54:00.692469095 +0000 UTC m=+144.340511785" lastFinishedPulling="2025-12-08 18:54:50.41730712 +0000 UTC m=+194.065349810" observedRunningTime="2025-12-08 18:54:53.833304139 +0000 UTC m=+197.481346829" watchObservedRunningTime="2025-12-08 18:54:53.835306412 +0000 UTC m=+197.483349102"
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.197712 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" podStartSLOduration=23.197697197 podStartE2EDuration="23.197697197s" podCreationTimestamp="2025-12-08 18:54:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:54.195339543 +0000 UTC m=+197.843382233" watchObservedRunningTime="2025-12-08 18:54:54.197697197 +0000 UTC m=+197.845739887"
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.756381 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn9sm" event={"ID":"4fca7730-4bcb-49ee-af38-a694d0f0438a","Type":"ContainerStarted","Data":"b3b7ea1c2da8b63681292d6bc20a2da11eeba37194f57c01416705fcd3cbb4ff"}
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.759832 4998 generic.go:358] "Generic (PLEG): container finished" podID="4449c7bb-a4da-495c-ace0-a06685eb0618" containerID="086d9f91ad380ecf5a29c7d950e94a5b5a4586c2505bfc6fa203cb2347ec647a" exitCode=0
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.759997 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4449c7bb-a4da-495c-ace0-a06685eb0618","Type":"ContainerDied","Data":"086d9f91ad380ecf5a29c7d950e94a5b5a4586c2505bfc6fa203cb2347ec647a"}
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.797589 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-858f858449-fc8tl_976eae66-182e-4714-adca-e8276f39ff21/controller-manager/0.log"
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.797646 4998 generic.go:358] "Generic (PLEG): container finished" podID="976eae66-182e-4714-adca-e8276f39ff21" containerID="80d5b1e0989a6f14b96bdf8f9ae9ddb66b73dc41a8c6a1a4dcbe98d31babaa4d" exitCode=255
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.797749 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" event={"ID":"976eae66-182e-4714-adca-e8276f39ff21","Type":"ContainerDied","Data":"80d5b1e0989a6f14b96bdf8f9ae9ddb66b73dc41a8c6a1a4dcbe98d31babaa4d"}
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.867989 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljbvf" event={"ID":"a920e838-b750-47a2-8241-bfd4d1d6f5b8","Type":"ContainerStarted","Data":"110cca7fae63e90b83e32912d38f4d637284f3e6177cf50550feba27b537908f"}
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.879146 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Dec 08 18:54:54 crc kubenswrapper[4998]: I1208 18:54:54.879232 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.034356 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-cn9sm" podStartSLOduration=13.703905722 podStartE2EDuration="1m1.034329211s" podCreationTimestamp="2025-12-08 18:53:54 +0000 UTC" firstStartedPulling="2025-12-08 18:54:02.004217154 +0000 UTC m=+145.652259844" lastFinishedPulling="2025-12-08 18:54:49.334640633 +0000 UTC m=+192.982683333" observedRunningTime="2025-12-08 18:54:54.821846989 +0000 UTC m=+198.469889679" watchObservedRunningTime="2025-12-08 18:54:55.034329211 +0000 UTC m=+198.682371901"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.034702 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ljbvf" podStartSLOduration=12.694561936 podStartE2EDuration="1m1.034697012s" podCreationTimestamp="2025-12-08 18:53:54 +0000 UTC" firstStartedPulling="2025-12-08 18:54:02.084662105 +0000 UTC m=+145.732704795" lastFinishedPulling="2025-12-08 18:54:50.424797181 +0000 UTC m=+194.072839871" observedRunningTime="2025-12-08 18:54:54.977518084 +0000 UTC m=+198.625560774" watchObservedRunningTime="2025-12-08 18:54:55.034697012 +0000 UTC m=+198.682739702"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.388027 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fb4b9f38-d216-4744-ab57-18f45a0af4b9" path="/var/lib/kubelet/pods/fb4b9f38-d216-4744-ab57-18f45a0af4b9/volumes"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.615921 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ljbvf"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.615964 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-ljbvf"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.692211 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-cn9sm"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.692269 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-cn9sm"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.692281 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8hjrw"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.692293 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8hjrw"
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.857355 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"]
Dec 08 18:54:55 crc kubenswrapper[4998]: I1208 18:54:55.984195 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhqvx" event={"ID":"3b36276e-af0d-4657-912a-df7c533bf822","Type":"ContainerStarted","Data":"859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7"}
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.021109 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcgg8" event={"ID":"b590b4bf-59f2-41c3-9284-1a05b5931ca8","Type":"ContainerStarted","Data":"cd9c3e386824c0b192c00174490bfea1b33f2f40b5f09ed93162137269c113ad"}
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.041852 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" event={"ID":"99b88ce0-fe99-4655-bc32-60693b30f558","Type":"ContainerStarted","Data":"12af97e032e39f21d33a3779a96d5b03a499f7fae5b1ee4e64a48c17edb49499"}
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.378107 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.378430 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.400731 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fhqvx" podStartSLOduration=11.916338333 podStartE2EDuration="1m0.40071357s" podCreationTimestamp="2025-12-08 18:53:56 +0000 UTC" firstStartedPulling="2025-12-08 18:54:01.932940963 +0000 UTC m=+145.580983653" lastFinishedPulling="2025-12-08 18:54:50.4173162 +0000 UTC m=+194.065358890" observedRunningTime="2025-12-08 18:54:56.399513338 +0000 UTC m=+200.047556028" watchObservedRunningTime="2025-12-08 18:54:56.40071357 +0000 UTC m=+200.048756260"
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.476213 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-lcgg8" podStartSLOduration=12.629341615 podStartE2EDuration="1m2.4761948s" podCreationTimestamp="2025-12-08 18:53:54 +0000 UTC" firstStartedPulling="2025-12-08 18:54:00.711060555 +0000 UTC m=+144.359103245" lastFinishedPulling="2025-12-08 18:54:50.55791374 +0000 UTC m=+194.205956430" observedRunningTime="2025-12-08 18:54:56.475225384 +0000 UTC m=+200.123268094" watchObservedRunningTime="2025-12-08 18:54:56.4761948 +0000 UTC m=+200.124237490"
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.485771 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.492078 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4449c7bb-a4da-495c-ace0-a06685eb0618-kube-api-access\") pod \"4449c7bb-a4da-495c-ace0-a06685eb0618\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") "
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.492148 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4449c7bb-a4da-495c-ace0-a06685eb0618-kubelet-dir\") pod \"4449c7bb-a4da-495c-ace0-a06685eb0618\" (UID: \"4449c7bb-a4da-495c-ace0-a06685eb0618\") "
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.493031 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4449c7bb-a4da-495c-ace0-a06685eb0618-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4449c7bb-a4da-495c-ace0-a06685eb0618" (UID: "4449c7bb-a4da-495c-ace0-a06685eb0618"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.512187 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4449c7bb-a4da-495c-ace0-a06685eb0618-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4449c7bb-a4da-495c-ace0-a06685eb0618" (UID: "4449c7bb-a4da-495c-ace0-a06685eb0618"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.593367 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4449c7bb-a4da-495c-ace0-a06685eb0618-kube-api-access\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:56 crc kubenswrapper[4998]: I1208 18:54:56.593412 4998 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4449c7bb-a4da-495c-ace0-a06685eb0618-kubelet-dir\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.033333 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-fhqvx"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.035534 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fhqvx"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.055341 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"4449c7bb-a4da-495c-ace0-a06685eb0618","Type":"ContainerDied","Data":"bad03abc0bdad1d14225f9a6a93233ce92ca2e892f9c477230a3eb8ff0c81e43"}
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.055384 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bad03abc0bdad1d14225f9a6a93233ce92ca2e892f9c477230a3eb8ff0c81e43"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.055471 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.096737 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-858f858449-fc8tl_976eae66-182e-4714-adca-e8276f39ff21/controller-manager/0.log"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.097047 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.214853 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-796b568864-9drcj"]
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.215558 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="976eae66-182e-4714-adca-e8276f39ff21" containerName="controller-manager"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.215590 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="976eae66-182e-4714-adca-e8276f39ff21" containerName="controller-manager"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.215609 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4449c7bb-a4da-495c-ace0-a06685eb0618" containerName="pruner"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.215617 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4449c7bb-a4da-495c-ace0-a06685eb0618" containerName="pruner"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.215756 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="4449c7bb-a4da-495c-ace0-a06685eb0618" containerName="pruner"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.215810 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="976eae66-182e-4714-adca-e8276f39ff21" containerName="controller-manager"
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.240264 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-client-ca\") pod \"976eae66-182e-4714-adca-e8276f39ff21\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") "
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.240338 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-config\") pod \"976eae66-182e-4714-adca-e8276f39ff21\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") "
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.240369 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-proxy-ca-bundles\") pod \"976eae66-182e-4714-adca-e8276f39ff21\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") "
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.240414 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976eae66-182e-4714-adca-e8276f39ff21-tmp\") pod \"976eae66-182e-4714-adca-e8276f39ff21\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") "
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.240445 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k4rht\" (UniqueName: \"kubernetes.io/projected/976eae66-182e-4714-adca-e8276f39ff21-kube-api-access-k4rht\") pod \"976eae66-182e-4714-adca-e8276f39ff21\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") "
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.240576 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976eae66-182e-4714-adca-e8276f39ff21-serving-cert\") pod \"976eae66-182e-4714-adca-e8276f39ff21\" (UID: \"976eae66-182e-4714-adca-e8276f39ff21\") "
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.247201 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-config" (OuterVolumeSpecName: "config") pod "976eae66-182e-4714-adca-e8276f39ff21" (UID: "976eae66-182e-4714-adca-e8276f39ff21"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.247302 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/976eae66-182e-4714-adca-e8276f39ff21-tmp" (OuterVolumeSpecName: "tmp") pod "976eae66-182e-4714-adca-e8276f39ff21" (UID: "976eae66-182e-4714-adca-e8276f39ff21"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.247767 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "976eae66-182e-4714-adca-e8276f39ff21" (UID: "976eae66-182e-4714-adca-e8276f39ff21"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.248065 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-client-ca" (OuterVolumeSpecName: "client-ca") pod "976eae66-182e-4714-adca-e8276f39ff21" (UID: "976eae66-182e-4714-adca-e8276f39ff21"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.257980 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/976eae66-182e-4714-adca-e8276f39ff21-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "976eae66-182e-4714-adca-e8276f39ff21" (UID: "976eae66-182e-4714-adca-e8276f39ff21"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.262852 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/976eae66-182e-4714-adca-e8276f39ff21-kube-api-access-k4rht" (OuterVolumeSpecName: "kube-api-access-k4rht") pod "976eae66-182e-4714-adca-e8276f39ff21" (UID: "976eae66-182e-4714-adca-e8276f39ff21"). InnerVolumeSpecName "kube-api-access-k4rht". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.342082 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-client-ca\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.342117 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-config\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.342125 4998 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/976eae66-182e-4714-adca-e8276f39ff21-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.342136 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/976eae66-182e-4714-adca-e8276f39ff21-tmp\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.342144 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k4rht\" (UniqueName: \"kubernetes.io/projected/976eae66-182e-4714-adca-e8276f39ff21-kube-api-access-k4rht\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.342154 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/976eae66-182e-4714-adca-e8276f39ff21-serving-cert\") on node \"crc\" DevicePath \"\""
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.377906 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-cn9sm" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="registry-server" probeResult="failure" output=<
Dec 08 18:54:57 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Dec 08 18:54:57 crc kubenswrapper[4998]: >
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.378314 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-8hjrw" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="registry-server" probeResult="failure" output=<
Dec 08 18:54:57 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Dec 08 18:54:57 crc kubenswrapper[4998]: >
Dec 08 18:54:57 crc kubenswrapper[4998]: I1208 18:54:57.523301 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-ljbvf" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="registry-server" probeResult="failure" output=<
Dec 08 18:54:57 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Dec 08 18:54:57 crc kubenswrapper[4998]: >
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.077549 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-858f858449-fc8tl_976eae66-182e-4714-adca-e8276f39ff21/controller-manager/0.log"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.283527 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-fhqvx" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="registry-server" probeResult="failure" output=<
Dec 08 18:54:58 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Dec 08 18:54:58 crc kubenswrapper[4998]: >
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.314274 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b568864-9drcj"]
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.314359 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zsnhs"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.314568 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zsnhs"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.316500 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.317881 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.359141 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.359219 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\""
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.359999 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\""
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.360778 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\""
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.361665 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\""
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.361809 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\""
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.366170 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-858f858449-fc8tl" event={"ID":"976eae66-182e-4714-adca-e8276f39ff21","Type":"ContainerDied","Data":"6167d92d3bdf04deb75949f4a164288144f6874dd570ae73d1f1080ca904dcee"}
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.366218 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-zsnhs"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.367725 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\""
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.370952 4998 scope.go:117] "RemoveContainer" containerID="80d5b1e0989a6f14b96bdf8f9ae9ddb66b73dc41a8c6a1a4dcbe98d31babaa4d"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.472614 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab1b8a6-376a-473b-a022-fb5e80025482-serving-cert\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.472673 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-config\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.472737 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-proxy-ca-bundles\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.472763 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w97fb\" (UniqueName: \"kubernetes.io/projected/4ab1b8a6-376a-473b-a022-fb5e80025482-kube-api-access-w97fb\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.472791 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-client-ca\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.472853 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab1b8a6-376a-473b-a022-fb5e80025482-tmp\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.572607 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-858f858449-fc8tl"]
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.574838 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-client-ca\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.574930 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab1b8a6-376a-473b-a022-fb5e80025482-tmp\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.575024 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab1b8a6-376a-473b-a022-fb5e80025482-serving-cert\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.575117 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-config\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.575182 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-proxy-ca-bundles\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.575213 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w97fb\" (UniqueName: \"kubernetes.io/projected/4ab1b8a6-376a-473b-a022-fb5e80025482-kube-api-access-w97fb\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.578917 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-config\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.580464 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-proxy-ca-bundles\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.582285 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-client-ca\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.582991 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab1b8a6-376a-473b-a022-fb5e80025482-serving-cert\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.583733 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab1b8a6-376a-473b-a022-fb5e80025482-tmp\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.589269 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-858f858449-fc8tl"]
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.617657 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w97fb\" (UniqueName: \"kubernetes.io/projected/4ab1b8a6-376a-473b-a022-fb5e80025482-kube-api-access-w97fb\") pod \"controller-manager-796b568864-9drcj\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:58 crc kubenswrapper[4998]: I1208 18:54:58.683853 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:54:59 crc kubenswrapper[4998]: I1208 18:54:59.201797 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" event={"ID":"99b88ce0-fe99-4655-bc32-60693b30f558","Type":"ContainerStarted","Data":"4ee0efffc6a421f82c3078582811d58ca320b7ad60e3deaef267ebffdfe418ee"}
Dec 08 18:54:59 crc kubenswrapper[4998]: I1208 18:54:59.202128 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"
Dec 08 18:54:59 crc kubenswrapper[4998]: I1208 18:54:59.210658 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zsnhs"
Dec 08 18:54:59 crc kubenswrapper[4998]: I1208 18:54:59.334927 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" podStartSLOduration=8.334911445 podStartE2EDuration="8.334911445s" podCreationTimestamp="2025-12-08 18:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:54:59.270014001 +0000 UTC m=+202.918056691" watchObservedRunningTime="2025-12-08 18:54:59.334911445 +0000 UTC m=+202.982954135"
Dec 08 18:54:59 crc kubenswrapper[4998]: I1208 18:54:59.430585 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="976eae66-182e-4714-adca-e8276f39ff21" path="/var/lib/kubelet/pods/976eae66-182e-4714-adca-e8276f39ff21/volumes"
Dec 08 18:54:59 crc kubenswrapper[4998]: I1208 18:54:59.693088 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"
Dec 08 18:55:00 crc kubenswrapper[4998]: I1208 18:55:00.014398 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsnhs"]
Dec 08 18:55:00 crc kubenswrapper[4998]: I1208 18:55:00.415282 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-796b568864-9drcj"]
Dec 08 18:55:01 crc kubenswrapper[4998]: I1208 18:55:01.251142 4998 generic.go:358] "Generic (PLEG): container finished" podID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerID="a9339bc74712657e17aa932f9d21c583c6068a279b1ed67c46f4e13416dc1b25" exitCode=0
Dec 08 18:55:01 crc kubenswrapper[4998]: I1208 18:55:01.251567 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lctrl" event={"ID":"86fd5359-56b1-4eb8-84ab-e4d39abc824d","Type":"ContainerDied","Data":"a9339bc74712657e17aa932f9d21c583c6068a279b1ed67c46f4e13416dc1b25"}
Dec 08 18:55:01 crc kubenswrapper[4998]: I1208 18:55:01.257061 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" event={"ID":"4ab1b8a6-376a-473b-a022-fb5e80025482","Type":"ContainerStarted","Data":"eaac335f0d69ea85adc0c43c52c4ac4f76bcca866ec093ff73848835e6654f00"}
Dec 08 18:55:01 crc kubenswrapper[4998]: I1208 18:55:01.258799 4998 generic.go:358] "Generic (PLEG): container finished" podID="3af11570-35c5-4991-ae53-bfd38cdea120" containerID="b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f" exitCode=0
Dec 08 18:55:01 crc kubenswrapper[4998]: I1208 18:55:01.258884 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4gnq" event={"ID":"3af11570-35c5-4991-ae53-bfd38cdea120","Type":"ContainerDied","Data":"b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f"}
Dec 08 18:55:01 crc kubenswrapper[4998]: I1208 18:55:01.781193 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zsnhs" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="registry-server" containerID="cri-o://cb2ac25e665e0985bdb93f2f399c987e6cb30701b503581452297a9e3de28565" gracePeriod=2
Dec 08 18:55:02 crc kubenswrapper[4998]: I1208 18:55:02.275438 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" event={"ID":"4ab1b8a6-376a-473b-a022-fb5e80025482","Type":"ContainerStarted","Data":"ebc68a28fbd03f87cbf9e15f8372a1818637a600a152c315392e8ff7190ce6c3"}
Dec 08 18:55:02 crc kubenswrapper[4998]: I1208 18:55:02.276649 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:55:02 crc kubenswrapper[4998]: I1208 18:55:02.278221 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4gnq" event={"ID":"3af11570-35c5-4991-ae53-bfd38cdea120","Type":"ContainerStarted","Data":"fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306"}
Dec 08 18:55:02 crc kubenswrapper[4998]: I1208 18:55:02.823943 4998 ???:1] "http: TLS handshake error from 192.168.126.11:36988: no serving certificate available for the kubelet"
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.276935 4998 patch_prober.go:28] interesting pod/controller-manager-796b568864-9drcj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.277461 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" podUID="4ab1b8a6-376a-473b-a022-fb5e80025482" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.425657 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lctrl" event={"ID":"86fd5359-56b1-4eb8-84ab-e4d39abc824d","Type":"ContainerStarted","Data":"7be491ba98710a5a38421a1bec85ea990fef9a8d5b965202ef79df4980e87ac6"}
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.428152 4998 generic.go:358] "Generic (PLEG): container finished" podID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerID="cb2ac25e665e0985bdb93f2f399c987e6cb30701b503581452297a9e3de28565" exitCode=0
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.429192 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsnhs" event={"ID":"59c1b82c-8ef2-4836-b72a-f603cfa44002","Type":"ContainerDied","Data":"cb2ac25e665e0985bdb93f2f399c987e6cb30701b503581452297a9e3de28565"}
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.476325 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lctrl" podStartSLOduration=18.13345921 podStartE2EDuration="1m5.47631003s" podCreationTimestamp="2025-12-08 18:53:58 +0000 UTC" firstStartedPulling="2025-12-08 18:54:03.223877578 +0000 UTC m=+146.871920268" lastFinishedPulling="2025-12-08 18:54:50.566728398 +0000 UTC m=+194.214771088" observedRunningTime="2025-12-08 18:55:03.468331375 +0000 UTC m=+207.116374085" watchObservedRunningTime="2025-12-08 18:55:03.47631003 +0000 UTC m=+207.124352720"
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.476473 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" podStartSLOduration=12.476468474 podStartE2EDuration="12.476468474s" podCreationTimestamp="2025-12-08 18:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:02.326162544 +0000 UTC m=+205.974205234" watchObservedRunningTime="2025-12-08 18:55:03.476468474 +0000 UTC m=+207.124511164"
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.600128 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s4gnq" podStartSLOduration=20.288512573 podStartE2EDuration="1m6.600110718s" podCreationTimestamp="2025-12-08 18:53:57 +0000 UTC" firstStartedPulling="2025-12-08 18:54:03.27552182 +0000 UTC m=+146.923564510" lastFinishedPulling="2025-12-08 18:54:49.587119965 +0000 UTC m=+193.235162655" observedRunningTime="2025-12-08 18:55:03.59981919 +0000 UTC m=+207.247861880" watchObservedRunningTime="2025-12-08 18:55:03.600110718 +0000 UTC m=+207.248153408"
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.688238 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.973316 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsnhs"
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.980893 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-utilities\") pod \"59c1b82c-8ef2-4836-b72a-f603cfa44002\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") "
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.980978 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-catalog-content\") pod \"59c1b82c-8ef2-4836-b72a-f603cfa44002\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") "
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.981000 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55mcl\" (UniqueName: \"kubernetes.io/projected/59c1b82c-8ef2-4836-b72a-f603cfa44002-kube-api-access-55mcl\") pod \"59c1b82c-8ef2-4836-b72a-f603cfa44002\" (UID: \"59c1b82c-8ef2-4836-b72a-f603cfa44002\") "
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.982922 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-utilities" (OuterVolumeSpecName: "utilities") pod "59c1b82c-8ef2-4836-b72a-f603cfa44002" (UID: "59c1b82c-8ef2-4836-b72a-f603cfa44002"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:55:03 crc kubenswrapper[4998]: I1208 18:55:03.989579 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59c1b82c-8ef2-4836-b72a-f603cfa44002-kube-api-access-55mcl" (OuterVolumeSpecName: "kube-api-access-55mcl") pod "59c1b82c-8ef2-4836-b72a-f603cfa44002" (UID: "59c1b82c-8ef2-4836-b72a-f603cfa44002"). InnerVolumeSpecName "kube-api-access-55mcl". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.013867 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "59c1b82c-8ef2-4836-b72a-f603cfa44002" (UID: "59c1b82c-8ef2-4836-b72a-f603cfa44002"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.082814 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.082861 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-55mcl\" (UniqueName: \"kubernetes.io/projected/59c1b82c-8ef2-4836-b72a-f603cfa44002-kube-api-access-55mcl\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.082880 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/59c1b82c-8ef2-4836-b72a-f603cfa44002-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.436300 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsnhs" event={"ID":"59c1b82c-8ef2-4836-b72a-f603cfa44002","Type":"ContainerDied","Data":"051b11639e4125a479ead239ed52f9a4efd47baf32634901defb6335ac0fc9f9"}
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.436296 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsnhs"
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.436359 4998 scope.go:117] "RemoveContainer" containerID="cb2ac25e665e0985bdb93f2f399c987e6cb30701b503581452297a9e3de28565"
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.453505 4998 scope.go:117] "RemoveContainer" containerID="f130dc470bf3b2dfcbb40d98cc0e069e8d10d14fa8f0a1ed96717044a0bba101"
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.470139 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsnhs"]
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.494269 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsnhs"]
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.634154 4998 scope.go:117] "RemoveContainer" containerID="ea2e97e387ed5c73b55207fa98f7436ccb693d2aeae852e9bd74b915582caca9"
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.879700 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Dec 08 18:55:04 crc kubenswrapper[4998]: I1208 18:55:04.879780 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.412627 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" path="/var/lib/kubelet/pods/59c1b82c-8ef2-4836-b72a-f603cfa44002/volumes"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.547624 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-lcgg8"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.547714 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-lcgg8"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.672310 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-lcgg8"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.695530 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ljbvf"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.747493 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8hjrw"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.752607 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-cn9sm"
Dec 08 18:55:05 crc kubenswrapper[4998]: I1208 18:55:05.772375 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ljbvf"
Dec 08 18:55:06 crc kubenswrapper[4998]: I1208 18:55:06.377119 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body=
Dec 08 18:55:06 crc kubenswrapper[4998]: I1208 18:55:06.377189 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused"
Dec 08 18:55:06 crc kubenswrapper[4998]: I1208 18:55:06.407275 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8hjrw"
Dec 08 18:55:06 crc kubenswrapper[4998]: I1208 18:55:06.420127 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-cn9sm"
Dec 08 18:55:06 crc kubenswrapper[4998]: I1208 18:55:06.542061 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-lcgg8"
Dec 08 18:55:07 crc kubenswrapper[4998]: I1208 18:55:07.082388 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fhqvx"
Dec 08 18:55:07 crc kubenswrapper[4998]: I1208 18:55:07.209045 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fhqvx"
Dec 08 18:55:08 crc kubenswrapper[4998]: I1208 18:55:08.094028 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hjrw"]
Dec 08 18:55:08 crc kubenswrapper[4998]: I1208 18:55:08.095938 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8hjrw" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="registry-server" containerID="cri-o://bde62bcf840cb54402b2101dfd0bd05d0398cff24cfd1e9682b1841cfd7ce456" gracePeriod=2
Dec 08 18:55:08 crc kubenswrapper[4998]: I1208 18:55:08.551197 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s4gnq"
Dec 08 18:55:08 crc kubenswrapper[4998]: I1208 18:55:08.551234 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-s4gnq"
Dec 08 18:55:08 crc kubenswrapper[4998]: I1208 18:55:08.595797 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lctrl"
Dec 08 18:55:08 crc kubenswrapper[4998]: I1208 18:55:08.595839 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-lctrl"
Dec 08 18:55:09 crc kubenswrapper[4998]: I1208 18:55:09.992311 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-s4gnq" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="registry-server" probeResult="failure" output=<
Dec 08 18:55:09 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Dec 08 18:55:09 crc kubenswrapper[4998]: >
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.012524 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lctrl" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="registry-server" probeResult="failure" output=<
Dec 08 18:55:10 crc kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s
Dec 08 18:55:10 crc kubenswrapper[4998]: >
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.055641 4998 generic.go:358] "Generic (PLEG): container finished" podID="fee47ac6-07af-412a-a292-3017390e3560" containerID="bde62bcf840cb54402b2101dfd0bd05d0398cff24cfd1e9682b1841cfd7ce456" exitCode=0
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.055798 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hjrw" event={"ID":"fee47ac6-07af-412a-a292-3017390e3560","Type":"ContainerDied","Data":"bde62bcf840cb54402b2101dfd0bd05d0398cff24cfd1e9682b1841cfd7ce456"}
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.113251 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn9sm"]
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.113566 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-cn9sm" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="registry-server" containerID="cri-o://b3b7ea1c2da8b63681292d6bc20a2da11eeba37194f57c01416705fcd3cbb4ff" gracePeriod=2
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.213111 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hjrw"
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.500881 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4q6n\" (UniqueName: \"kubernetes.io/projected/fee47ac6-07af-412a-a292-3017390e3560-kube-api-access-d4q6n\") pod \"fee47ac6-07af-412a-a292-3017390e3560\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") "
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.501494 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-utilities\") pod \"fee47ac6-07af-412a-a292-3017390e3560\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") "
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.501614 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-catalog-content\") pod \"fee47ac6-07af-412a-a292-3017390e3560\" (UID: \"fee47ac6-07af-412a-a292-3017390e3560\") "
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.505605 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-utilities" (OuterVolumeSpecName: "utilities") pod "fee47ac6-07af-412a-a292-3017390e3560" (UID: "fee47ac6-07af-412a-a292-3017390e3560"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.530294 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fee47ac6-07af-412a-a292-3017390e3560-kube-api-access-d4q6n" (OuterVolumeSpecName: "kube-api-access-d4q6n") pod "fee47ac6-07af-412a-a292-3017390e3560" (UID: "fee47ac6-07af-412a-a292-3017390e3560"). InnerVolumeSpecName "kube-api-access-d4q6n". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.532548 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fee47ac6-07af-412a-a292-3017390e3560" (UID: "fee47ac6-07af-412a-a292-3017390e3560"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.602752 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.602793 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fee47ac6-07af-412a-a292-3017390e3560-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:10 crc kubenswrapper[4998]: I1208 18:55:10.602835 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4q6n\" (UniqueName: \"kubernetes.io/projected/fee47ac6-07af-412a-a292-3017390e3560-kube-api-access-d4q6n\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.068203 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8hjrw" event={"ID":"fee47ac6-07af-412a-a292-3017390e3560","Type":"ContainerDied","Data":"e28d25a1d5c44196ac39b27ab9a6784d17beb1d7cc9c89db4597d5cdc3778757"}
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.068287 4998 scope.go:117] "RemoveContainer" containerID="bde62bcf840cb54402b2101dfd0bd05d0398cff24cfd1e9682b1841cfd7ce456"
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.068496 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8hjrw"
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.300562 4998 scope.go:117] "RemoveContainer" containerID="e5673c888efec4003d377a7e50aedb0189aff98512dec9a2bd4c8692affb2895"
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.429549 4998 scope.go:117] "RemoveContainer" containerID="66c1d82dc6b184a5caf09206bf3a5d2a79052387435caaa2351429aeda2c1ed8"
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.516179 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8hjrw"]
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.517989 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8hjrw"]
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.583080 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b568864-9drcj"]
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.583356 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" podUID="4ab1b8a6-376a-473b-a022-fb5e80025482" containerName="controller-manager" containerID="cri-o://ebc68a28fbd03f87cbf9e15f8372a1818637a600a152c315392e8ff7190ce6c3" gracePeriod=30
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.601993 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"]
Dec 08 18:55:11 crc kubenswrapper[4998]: I1208 18:55:11.602264 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" podUID="99b88ce0-fe99-4655-bc32-60693b30f558" containerName="route-controller-manager" containerID="cri-o://4ee0efffc6a421f82c3078582811d58ca320b7ad60e3deaef267ebffdfe418ee" gracePeriod=30
Dec 08 18:55:12 crc kubenswrapper[4998]: I1208 18:55:12.339217 4998 generic.go:358] "Generic (PLEG): container finished" podID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerID="b3b7ea1c2da8b63681292d6bc20a2da11eeba37194f57c01416705fcd3cbb4ff" exitCode=0
Dec 08 18:55:12 crc kubenswrapper[4998]: I1208 18:55:12.339434 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn9sm" event={"ID":"4fca7730-4bcb-49ee-af38-a694d0f0438a","Type":"ContainerDied","Data":"b3b7ea1c2da8b63681292d6bc20a2da11eeba37194f57c01416705fcd3cbb4ff"}
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.163279 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn9sm"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.276254 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-catalog-content\") pod \"4fca7730-4bcb-49ee-af38-a694d0f0438a\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") "
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.276779 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsxbz\" (UniqueName: \"kubernetes.io/projected/4fca7730-4bcb-49ee-af38-a694d0f0438a-kube-api-access-wsxbz\") pod \"4fca7730-4bcb-49ee-af38-a694d0f0438a\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") "
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.276914 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-utilities\") pod \"4fca7730-4bcb-49ee-af38-a694d0f0438a\" (UID: \"4fca7730-4bcb-49ee-af38-a694d0f0438a\") "
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.279531 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-utilities" (OuterVolumeSpecName: "utilities") pod "4fca7730-4bcb-49ee-af38-a694d0f0438a" (UID: "4fca7730-4bcb-49ee-af38-a694d0f0438a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.282180 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fca7730-4bcb-49ee-af38-a694d0f0438a-kube-api-access-wsxbz" (OuterVolumeSpecName: "kube-api-access-wsxbz") pod "4fca7730-4bcb-49ee-af38-a694d0f0438a" (UID: "4fca7730-4bcb-49ee-af38-a694d0f0438a"). InnerVolumeSpecName "kube-api-access-wsxbz". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.349554 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4fca7730-4bcb-49ee-af38-a694d0f0438a" (UID: "4fca7730-4bcb-49ee-af38-a694d0f0438a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.355956 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-cn9sm"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.356008 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-cn9sm" event={"ID":"4fca7730-4bcb-49ee-af38-a694d0f0438a","Type":"ContainerDied","Data":"b3e3df987b26140b02c1c69425652a61f5b7fa8e1269e2c9c09b4a59c8976728"}
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.356420 4998 scope.go:117] "RemoveContainer" containerID="b3b7ea1c2da8b63681292d6bc20a2da11eeba37194f57c01416705fcd3cbb4ff"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.357949 4998 generic.go:358] "Generic (PLEG): container finished" podID="99b88ce0-fe99-4655-bc32-60693b30f558" containerID="4ee0efffc6a421f82c3078582811d58ca320b7ad60e3deaef267ebffdfe418ee" exitCode=0
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.358083 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" event={"ID":"99b88ce0-fe99-4655-bc32-60693b30f558","Type":"ContainerDied","Data":"4ee0efffc6a421f82c3078582811d58ca320b7ad60e3deaef267ebffdfe418ee"}
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.361002 4998 generic.go:358] "Generic (PLEG): container finished" podID="4ab1b8a6-376a-473b-a022-fb5e80025482" containerID="ebc68a28fbd03f87cbf9e15f8372a1818637a600a152c315392e8ff7190ce6c3" exitCode=0
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.361084 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" event={"ID":"4ab1b8a6-376a-473b-a022-fb5e80025482","Type":"ContainerDied","Data":"ebc68a28fbd03f87cbf9e15f8372a1818637a600a152c315392e8ff7190ce6c3"}
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.372376 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fee47ac6-07af-412a-a292-3017390e3560" path="/var/lib/kubelet/pods/fee47ac6-07af-412a-a292-3017390e3560/volumes"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.378242 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wsxbz\" (UniqueName: \"kubernetes.io/projected/4fca7730-4bcb-49ee-af38-a694d0f0438a-kube-api-access-wsxbz\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.378276 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.378285 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4fca7730-4bcb-49ee-af38-a694d0f0438a-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.379891 4998 scope.go:117] "RemoveContainer" containerID="ff79729f9ceb39f269152e997c8a52cf739d23c0a636847040932880ccaa0ecd"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.391437 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-cn9sm"]
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.394193 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-cn9sm"]
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.421940 4998 scope.go:117] "RemoveContainer" containerID="0ae060c227b04ff7fab9cad6fcdaff9cdda6bfe9336345b01293e3650f1e35df"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.655326 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b568864-9drcj"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688059 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-676c7bbc99-xk57m"]
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688861 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="registry-server"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688890 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="registry-server"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688899 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="registry-server"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688906 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="registry-server"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688921 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ab1b8a6-376a-473b-a022-fb5e80025482" containerName="controller-manager"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688932 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ab1b8a6-376a-473b-a022-fb5e80025482" containerName="controller-manager"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688955 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="extract-content"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.688962 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="extract-content"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689000 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="extract-content"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689013 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="extract-content"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689037 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="extract-utilities"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689044 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="extract-utilities"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689061 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="extract-utilities"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689067 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="extract-utilities"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689084 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="registry-server"
Dec 08 18:55:13 crc kubenswrapper[4998]: I1208
18:55:13.689093 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="registry-server" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689111 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="extract-content" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689118 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="extract-content" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689127 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="extract-utilities" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689139 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="extract-utilities" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689266 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="59c1b82c-8ef2-4836-b72a-f603cfa44002" containerName="registry-server" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689282 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ab1b8a6-376a-473b-a022-fb5e80025482" containerName="controller-manager" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689296 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" containerName="registry-server" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.689312 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fee47ac6-07af-412a-a292-3017390e3560" containerName="registry-server" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.763950 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-676c7bbc99-xk57m"] Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.764177 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.782878 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w97fb\" (UniqueName: \"kubernetes.io/projected/4ab1b8a6-376a-473b-a022-fb5e80025482-kube-api-access-w97fb\") pod \"4ab1b8a6-376a-473b-a022-fb5e80025482\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.782985 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-proxy-ca-bundles\") pod \"4ab1b8a6-376a-473b-a022-fb5e80025482\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.783082 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-config\") pod \"4ab1b8a6-376a-473b-a022-fb5e80025482\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.783125 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab1b8a6-376a-473b-a022-fb5e80025482-serving-cert\") pod \"4ab1b8a6-376a-473b-a022-fb5e80025482\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.783190 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab1b8a6-376a-473b-a022-fb5e80025482-tmp\") pod \"4ab1b8a6-376a-473b-a022-fb5e80025482\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.783224 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-client-ca\") pod \"4ab1b8a6-376a-473b-a022-fb5e80025482\" (UID: \"4ab1b8a6-376a-473b-a022-fb5e80025482\") " Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.784291 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-client-ca" (OuterVolumeSpecName: "client-ca") pod "4ab1b8a6-376a-473b-a022-fb5e80025482" (UID: "4ab1b8a6-376a-473b-a022-fb5e80025482"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.789410 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ab1b8a6-376a-473b-a022-fb5e80025482-tmp" (OuterVolumeSpecName: "tmp") pod "4ab1b8a6-376a-473b-a022-fb5e80025482" (UID: "4ab1b8a6-376a-473b-a022-fb5e80025482"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.790023 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-config" (OuterVolumeSpecName: "config") pod "4ab1b8a6-376a-473b-a022-fb5e80025482" (UID: "4ab1b8a6-376a-473b-a022-fb5e80025482"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.790190 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4ab1b8a6-376a-473b-a022-fb5e80025482" (UID: "4ab1b8a6-376a-473b-a022-fb5e80025482"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.795217 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ab1b8a6-376a-473b-a022-fb5e80025482-kube-api-access-w97fb" (OuterVolumeSpecName: "kube-api-access-w97fb") pod "4ab1b8a6-376a-473b-a022-fb5e80025482" (UID: "4ab1b8a6-376a-473b-a022-fb5e80025482"). InnerVolumeSpecName "kube-api-access-w97fb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.797038 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4ab1b8a6-376a-473b-a022-fb5e80025482-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4ab1b8a6-376a-473b-a022-fb5e80025482" (UID: "4ab1b8a6-376a-473b-a022-fb5e80025482"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.892474 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-client-ca\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.892581 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-serving-cert\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.892644 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-config\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.892742 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-proxy-ca-bundles\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.892820 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd5kx\" (UniqueName: \"kubernetes.io/projected/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-kube-api-access-vd5kx\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " 
pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.892902 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-tmp\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.892976 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ab1b8a6-376a-473b-a022-fb5e80025482-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.893009 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4ab1b8a6-376a-473b-a022-fb5e80025482-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.893035 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.893064 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w97fb\" (UniqueName: \"kubernetes.io/projected/4ab1b8a6-376a-473b-a022-fb5e80025482-kube-api-access-w97fb\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.893081 4998 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.893094 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ab1b8a6-376a-473b-a022-fb5e80025482-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.994449 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vd5kx\" (UniqueName: \"kubernetes.io/projected/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-kube-api-access-vd5kx\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.994541 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-tmp\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.994615 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-client-ca\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.994675 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-serving-cert\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.994783 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-config\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.994824 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-proxy-ca-bundles\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.996272 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-proxy-ca-bundles\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:13 crc kubenswrapper[4998]: I1208 18:55:13.997262 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-client-ca\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.097619 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-tmp\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.098798 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-config\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.111522 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-serving-cert\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.161951 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd5kx\" (UniqueName: \"kubernetes.io/projected/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-kube-api-access-vd5kx\") pod \"controller-manager-676c7bbc99-xk57m\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:14 crc 
kubenswrapper[4998]: I1208 18:55:14.219955 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.253225 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb"] Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.254003 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="99b88ce0-fe99-4655-bc32-60693b30f558" containerName="route-controller-manager" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.254027 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="99b88ce0-fe99-4655-bc32-60693b30f558" containerName="route-controller-manager" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.254166 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="99b88ce0-fe99-4655-bc32-60693b30f558" containerName="route-controller-manager" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.487726 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.490824 4998 patch_prober.go:28] interesting pod/controller-manager-796b568864-9drcj container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" start-of-body= Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.490885 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" podUID="4ab1b8a6-376a-473b-a022-fb5e80025482" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.61:8443/healthz\": context deadline exceeded" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.492173 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/99b88ce0-fe99-4655-bc32-60693b30f558-tmp\") pod \"99b88ce0-fe99-4655-bc32-60693b30f558\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.492237 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-config\") pod \"99b88ce0-fe99-4655-bc32-60693b30f558\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.492268 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lgg6q\" (UniqueName: \"kubernetes.io/projected/99b88ce0-fe99-4655-bc32-60693b30f558-kube-api-access-lgg6q\") pod \"99b88ce0-fe99-4655-bc32-60693b30f558\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.492291 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-client-ca\") pod \"99b88ce0-fe99-4655-bc32-60693b30f558\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.492316 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/99b88ce0-fe99-4655-bc32-60693b30f558-serving-cert\") pod \"99b88ce0-fe99-4655-bc32-60693b30f558\" (UID: \"99b88ce0-fe99-4655-bc32-60693b30f558\") " Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.498710 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-config" (OuterVolumeSpecName: "config") pod "99b88ce0-fe99-4655-bc32-60693b30f558" (UID: "99b88ce0-fe99-4655-bc32-60693b30f558"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.501899 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/99b88ce0-fe99-4655-bc32-60693b30f558-tmp" (OuterVolumeSpecName: "tmp") pod "99b88ce0-fe99-4655-bc32-60693b30f558" (UID: "99b88ce0-fe99-4655-bc32-60693b30f558"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.502123 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb"] Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.502194 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-client-ca" (OuterVolumeSpecName: "client-ca") pod "99b88ce0-fe99-4655-bc32-60693b30f558" (UID: "99b88ce0-fe99-4655-bc32-60693b30f558"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.502284 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.511027 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.511143 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm" event={"ID":"99b88ce0-fe99-4655-bc32-60693b30f558","Type":"ContainerDied","Data":"12af97e032e39f21d33a3779a96d5b03a499f7fae5b1ee4e64a48c17edb49499"} Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.511171 4998 scope.go:117] "RemoveContainer" containerID="4ee0efffc6a421f82c3078582811d58ca320b7ad60e3deaef267ebffdfe418ee" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.518359 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99b88ce0-fe99-4655-bc32-60693b30f558-kube-api-access-lgg6q" (OuterVolumeSpecName: "kube-api-access-lgg6q") pod "99b88ce0-fe99-4655-bc32-60693b30f558" (UID: "99b88ce0-fe99-4655-bc32-60693b30f558"). InnerVolumeSpecName "kube-api-access-lgg6q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.519168 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99b88ce0-fe99-4655-bc32-60693b30f558-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "99b88ce0-fe99-4655-bc32-60693b30f558" (UID: "99b88ce0-fe99-4655-bc32-60693b30f558"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.519480 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" event={"ID":"4ab1b8a6-376a-473b-a022-fb5e80025482","Type":"ContainerDied","Data":"eaac335f0d69ea85adc0c43c52c4ac4f76bcca866ec093ff73848835e6654f00"} Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.519830 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-796b568864-9drcj" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.543882 4998 scope.go:117] "RemoveContainer" containerID="ebc68a28fbd03f87cbf9e15f8372a1818637a600a152c315392e8ff7190ce6c3" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.573844 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-796b568864-9drcj"] Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.575127 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-796b568864-9drcj"] Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594172 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51c3121f-25f2-4ca4-8d6e-085650249cc0-tmp\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594254 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fpxdh\" (UniqueName: \"kubernetes.io/projected/51c3121f-25f2-4ca4-8d6e-085650249cc0-kube-api-access-fpxdh\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594296 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-config\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594319 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3121f-25f2-4ca4-8d6e-085650249cc0-serving-cert\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594338 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-client-ca\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594382 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/99b88ce0-fe99-4655-bc32-60693b30f558-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594392 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594401 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lgg6q\" (UniqueName: \"kubernetes.io/projected/99b88ce0-fe99-4655-bc32-60693b30f558-kube-api-access-lgg6q\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594410 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/99b88ce0-fe99-4655-bc32-60693b30f558-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.594422 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/99b88ce0-fe99-4655-bc32-60693b30f558-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.695588 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51c3121f-25f2-4ca4-8d6e-085650249cc0-tmp\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.695635 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fpxdh\" (UniqueName: \"kubernetes.io/projected/51c3121f-25f2-4ca4-8d6e-085650249cc0-kube-api-access-fpxdh\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.695668 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-config\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.695702 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3121f-25f2-4ca4-8d6e-085650249cc0-serving-cert\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.695721 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-client-ca\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.696638 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-client-ca\") pod 
\"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.697600 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51c3121f-25f2-4ca4-8d6e-085650249cc0-tmp\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.697749 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-config\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.703403 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3121f-25f2-4ca4-8d6e-085650249cc0-serving-cert\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.715035 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fpxdh\" (UniqueName: \"kubernetes.io/projected/51c3121f-25f2-4ca4-8d6e-085650249cc0-kube-api-access-fpxdh\") pod \"route-controller-manager-85454d5c6-78tdb\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.804440 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-676c7bbc99-xk57m"] Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.851154 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.880560 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.880620 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.906498 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"] Dec 08 18:55:14 crc kubenswrapper[4998]: I1208 18:55:14.909749 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7dd4cddf49-t6ftm"] Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.375827 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ab1b8a6-376a-473b-a022-fb5e80025482" path="/var/lib/kubelet/pods/4ab1b8a6-376a-473b-a022-fb5e80025482/volumes" Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.376505 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fca7730-4bcb-49ee-af38-a694d0f0438a" path="/var/lib/kubelet/pods/4fca7730-4bcb-49ee-af38-a694d0f0438a/volumes" Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.377292 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99b88ce0-fe99-4655-bc32-60693b30f558" path="/var/lib/kubelet/pods/99b88ce0-fe99-4655-bc32-60693b30f558/volumes" Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.468138 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb"] Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.582576 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" event={"ID":"51c3121f-25f2-4ca4-8d6e-085650249cc0","Type":"ContainerStarted","Data":"fdf3fd17f275ff2d13e0a88bfd94ce98b573eee9ec3917d00b503c711addd88a"} Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.586192 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" event={"ID":"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8","Type":"ContainerStarted","Data":"d9af25cb52c53430761cfb600f61bd42bbdb677ed958250829c1a4f92d731693"} Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.586265 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" event={"ID":"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8","Type":"ContainerStarted","Data":"5e507d6ccf7512a99410c208eee91eb111a8019ed54e23f4bdb8ab5e09f8b0de"} Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.587003 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.741367 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:15 crc kubenswrapper[4998]: I1208 18:55:15.762900 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" podStartSLOduration=4.762877589 podStartE2EDuration="4.762877589s" podCreationTimestamp="2025-12-08 18:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:15.616071971 +0000 UTC m=+219.264114681" watchObservedRunningTime="2025-12-08 18:55:15.762877589 +0000 UTC m=+219.410920279" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.381013 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.381081 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.381123 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.381569 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.381618 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.381575 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"6012e8c7ffe5c43e3b6ccbd8a83d7496841e085328a911c092a8b8b81d7c9714"} pod="openshift-console/downloads-747b44746d-ln56w" containerMessage="Container download-server failed liveness probe, will be restarted" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.381721 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" containerID="cri-o://6012e8c7ffe5c43e3b6ccbd8a83d7496841e085328a911c092a8b8b81d7c9714" gracePeriod=2 Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.596948 4998 generic.go:358] "Generic (PLEG): container finished" podID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerID="6012e8c7ffe5c43e3b6ccbd8a83d7496841e085328a911c092a8b8b81d7c9714" exitCode=0 Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.597028 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" 
event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerDied","Data":"6012e8c7ffe5c43e3b6ccbd8a83d7496841e085328a911c092a8b8b81d7c9714"} Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.597312 4998 scope.go:117] "RemoveContainer" containerID="d89c7ac1b876088203fe57e3ecb61f9283414f96af11a7859f753f75a7c64672" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.601512 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" event={"ID":"51c3121f-25f2-4ca4-8d6e-085650249cc0","Type":"ContainerStarted","Data":"ebe7147dbe2d93dac0a13e503551f5bdd2cafb758921deef3dc012f044f250eb"} Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.601775 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.606848 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:16 crc kubenswrapper[4998]: I1208 18:55:16.619070 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" podStartSLOduration=5.619052029 podStartE2EDuration="5.619052029s" podCreationTimestamp="2025-12-08 18:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:16.615049232 +0000 UTC m=+220.263091922" watchObservedRunningTime="2025-12-08 18:55:16.619052029 +0000 UTC m=+220.267094709" Dec 08 18:55:17 crc kubenswrapper[4998]: I1208 18:55:17.618253 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-ln56w" event={"ID":"0f532410-7407-41fe-b95e-d1a785d4ebfe","Type":"ContainerStarted","Data":"31b557e39acd34f57d3a2d0183e51154b7dccdecd5ffd10dd6eefedaae43e3d4"} Dec 08 18:55:17 crc kubenswrapper[4998]: I1208 18:55:17.619008 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-ln56w" Dec 08 18:55:17 crc kubenswrapper[4998]: I1208 18:55:17.619075 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:17 crc kubenswrapper[4998]: I1208 18:55:17.619112 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.376346 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerName="oauth-openshift" containerID="cri-o://ce8a089c081b029afdb8ad93192f94c8cc2fa0328e0a209e429782944d86a4a2" gracePeriod=15 Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.598498 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 
18:55:18.630993 4998 generic.go:358] "Generic (PLEG): container finished" podID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerID="ce8a089c081b029afdb8ad93192f94c8cc2fa0328e0a209e429782944d86a4a2" exitCode=0 Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.631863 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" event={"ID":"a9359b08-b878-4a61-b612-0d51c03b3e8d","Type":"ContainerDied","Data":"ce8a089c081b029afdb8ad93192f94c8cc2fa0328e0a209e429782944d86a4a2"} Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.632205 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.632253 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.931321 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.955426 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:55:18 crc kubenswrapper[4998]: I1208 18:55:18.996708 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.109224 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.139080 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-c5748848d-pn8m5"] Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.139815 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerName="oauth-openshift" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.139840 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerName="oauth-openshift" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.139999 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" containerName="oauth-openshift" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.213075 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-provider-selection\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.213421 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-dir\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.213486 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-policies\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.213585 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.213557 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-router-certs\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.214709 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215031 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-serving-cert\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215075 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-cliconfig\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215117 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-service-ca\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215146 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-trusted-ca-bundle\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215198 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-idp-0-file-data\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215228 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-login\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215262 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-ocp-branding-template\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215291 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-error\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215342 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kv2pr\" (UniqueName: \"kubernetes.io/projected/a9359b08-b878-4a61-b612-0d51c03b3e8d-kube-api-access-kv2pr\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc 
kubenswrapper[4998]: I1208 18:55:19.215550 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-session\") pod \"a9359b08-b878-4a61-b612-0d51c03b3e8d\" (UID: \"a9359b08-b878-4a61-b612-0d51c03b3e8d\") " Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215635 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.216075 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.216133 4998 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.215646 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.216159 4998 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-audit-policies\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.216175 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.222957 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.223442 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). 
InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.223622 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.223921 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.225749 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.226465 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9359b08-b878-4a61-b612-0d51c03b3e8d-kube-api-access-kv2pr" (OuterVolumeSpecName: "kube-api-access-kv2pr") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "kube-api-access-kv2pr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.226910 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.227285 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.233286 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "a9359b08-b878-4a61-b612-0d51c03b3e8d" (UID: "a9359b08-b878-4a61-b612-0d51c03b3e8d"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317317 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317372 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317385 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317398 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317414 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317426 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kv2pr\" (UniqueName: \"kubernetes.io/projected/a9359b08-b878-4a61-b612-0d51c03b3e8d-kube-api-access-kv2pr\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317440 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317454 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317468 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317481 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:19 crc kubenswrapper[4998]: I1208 18:55:19.317496 4998 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/a9359b08-b878-4a61-b612-0d51c03b3e8d-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.154971 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.158008 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-ncn97" event={"ID":"a9359b08-b878-4a61-b612-0d51c03b3e8d","Type":"ContainerDied","Data":"d6999630ee20ea341c737c92541fbbe912bce3093755e252b802a8d41b1035df"} Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.158117 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c5748848d-pn8m5"] Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.158472 4998 scope.go:117] "RemoveContainer" containerID="ce8a089c081b029afdb8ad93192f94c8cc2fa0328e0a209e429782944d86a4a2" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.158801 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.161007 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.161157 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.173821 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.173854 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.174544 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.174611 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.174911 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.175112 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.175229 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.175498 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.175658 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 18:55:20 crc 
kubenswrapper[4998]: I1208 18:55:20.175673 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.175725 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.175775 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.175846 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.192675 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.902773 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.902867 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-login\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.902922 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-audit-policies\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.902965 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.902989 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903012 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903058 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-error\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903122 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-session\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903159 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903247 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54363aa9-5d10-4b28-96a8-a766ef3395b6-audit-dir\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903285 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903304 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903329 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbdb2\" (UniqueName: \"kubernetes.io/projected/54363aa9-5d10-4b28-96a8-a766ef3395b6-kube-api-access-tbdb2\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.903350 4998 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.961261 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.987916 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ncn97"] Dec 08 18:55:20 crc kubenswrapper[4998]: I1208 18:55:20.990889 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-ncn97"] Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.031892 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-login\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.031962 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-audit-policies\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.031998 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.032024 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.032042 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.032092 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-error\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " 
pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.032138 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-session\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.033338 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.033526 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54363aa9-5d10-4b28-96a8-a766ef3395b6-audit-dir\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.033603 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.033647 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.033663 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tbdb2\" (UniqueName: \"kubernetes.io/projected/54363aa9-5d10-4b28-96a8-a766ef3395b6-kube-api-access-tbdb2\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.033718 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.033754 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" 
Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.034468 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-audit-policies\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.035261 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.035915 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54363aa9-5d10-4b28-96a8-a766ef3395b6-audit-dir\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.035964 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-service-ca\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.036929 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-cliconfig\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.037637 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-session\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.038298 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-login\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.038952 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-serving-cert\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.043522 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-error\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.050394 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.050739 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.050915 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-router-certs\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.052815 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/54363aa9-5d10-4b28-96a8-a766ef3395b6-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.054932 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tbdb2\" (UniqueName: \"kubernetes.io/projected/54363aa9-5d10-4b28-96a8-a766ef3395b6-kube-api-access-tbdb2\") pod \"oauth-openshift-c5748848d-pn8m5\" (UID: \"54363aa9-5d10-4b28-96a8-a766ef3395b6\") " pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: E1208 18:55:21.058883 4998 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda9359b08_b878_4a61_b612_0d51c03b3e8d.slice\": RecentStats: unable to find data in memory cache]" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.079641 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.611162 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9359b08-b878-4a61-b612-0d51c03b3e8d" path="/var/lib/kubelet/pods/a9359b08-b878-4a61-b612-0d51c03b3e8d/volumes" Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.623004 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lctrl"] Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.623331 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lctrl" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="registry-server" containerID="cri-o://7be491ba98710a5a38421a1bec85ea990fef9a8d5b965202ef79df4980e87ac6" gracePeriod=2 Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.926256 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-c5748848d-pn8m5"] Dec 08 18:55:21 crc kubenswrapper[4998]: W1208 18:55:21.934114 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod54363aa9_5d10_4b28_96a8_a766ef3395b6.slice/crio-9483e5368b6434db6b101259989ecee719b410ce06f0d4cb9788504326c2a298 WatchSource:0}: Error finding container 9483e5368b6434db6b101259989ecee719b410ce06f0d4cb9788504326c2a298: Status 404 returned error can't find the container with id 9483e5368b6434db6b101259989ecee719b410ce06f0d4cb9788504326c2a298 Dec 08 18:55:21 crc kubenswrapper[4998]: I1208 18:55:21.969931 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" event={"ID":"54363aa9-5d10-4b28-96a8-a766ef3395b6","Type":"ContainerStarted","Data":"9483e5368b6434db6b101259989ecee719b410ce06f0d4cb9788504326c2a298"} Dec 08 18:55:24 crc kubenswrapper[4998]: I1208 18:55:24.998463 4998 generic.go:358] "Generic (PLEG): container finished" podID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerID="7be491ba98710a5a38421a1bec85ea990fef9a8d5b965202ef79df4980e87ac6" exitCode=0 Dec 08 18:55:24 crc kubenswrapper[4998]: I1208 18:55:24.999044 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lctrl" event={"ID":"86fd5359-56b1-4eb8-84ab-e4d39abc824d","Type":"ContainerDied","Data":"7be491ba98710a5a38421a1bec85ea990fef9a8d5b965202ef79df4980e87ac6"} Dec 08 18:55:26 crc kubenswrapper[4998]: I1208 18:55:26.446158 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:26 crc kubenswrapper[4998]: I1208 18:55:26.447255 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:27 crc kubenswrapper[4998]: I1208 18:55:27.481117 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" event={"ID":"54363aa9-5d10-4b28-96a8-a766ef3395b6","Type":"ContainerStarted","Data":"4d76e285b9bd7b51426cb0902a30b46ae80006ea7e5671b94b269346661a3f6e"} Dec 08 18:55:28 crc 
kubenswrapper[4998]: I1208 18:55:28.233952 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.280485 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-utilities\") pod \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.280633 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-catalog-content\") pod \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.280796 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brmc6\" (UniqueName: \"kubernetes.io/projected/86fd5359-56b1-4eb8-84ab-e4d39abc824d-kube-api-access-brmc6\") pod \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\" (UID: \"86fd5359-56b1-4eb8-84ab-e4d39abc824d\") " Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.282732 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-utilities" (OuterVolumeSpecName: "utilities") pod "86fd5359-56b1-4eb8-84ab-e4d39abc824d" (UID: "86fd5359-56b1-4eb8-84ab-e4d39abc824d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.382381 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.413618 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86fd5359-56b1-4eb8-84ab-e4d39abc824d" (UID: "86fd5359-56b1-4eb8-84ab-e4d39abc824d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.415273 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86fd5359-56b1-4eb8-84ab-e4d39abc824d-kube-api-access-brmc6" (OuterVolumeSpecName: "kube-api-access-brmc6") pod "86fd5359-56b1-4eb8-84ab-e4d39abc824d" (UID: "86fd5359-56b1-4eb8-84ab-e4d39abc824d"). InnerVolumeSpecName "kube-api-access-brmc6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.484470 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-brmc6\" (UniqueName: \"kubernetes.io/projected/86fd5359-56b1-4eb8-84ab-e4d39abc824d-kube-api-access-brmc6\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.724634 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86fd5359-56b1-4eb8-84ab-e4d39abc824d-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.741599 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-lctrl" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.741649 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lctrl" event={"ID":"86fd5359-56b1-4eb8-84ab-e4d39abc824d","Type":"ContainerDied","Data":"3568123769f56a1ce6d5b9841615c0f162f65baba4b39bc9c52b106be80cf77e"} Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.741769 4998 scope.go:117] "RemoveContainer" containerID="7be491ba98710a5a38421a1bec85ea990fef9a8d5b965202ef79df4980e87ac6" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.742858 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.759550 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.770019 4998 scope.go:117] "RemoveContainer" containerID="a9339bc74712657e17aa932f9d21c583c6068a279b1ed67c46f4e13416dc1b25" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.777286 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-c5748848d-pn8m5" podStartSLOduration=35.777255048 podStartE2EDuration="35.777255048s" podCreationTimestamp="2025-12-08 18:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:28.773040914 +0000 UTC m=+232.421083604" watchObservedRunningTime="2025-12-08 18:55:28.777255048 +0000 UTC m=+232.425297758" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.808108 4998 scope.go:117] "RemoveContainer" containerID="ed0b6c1c72889738cf612798601709456d67aa8b8e2d50693ba60473a6015bc5" Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.840554 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lctrl"] Dec 08 18:55:28 crc kubenswrapper[4998]: I1208 18:55:28.844988 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lctrl"] Dec 08 18:55:29 crc kubenswrapper[4998]: I1208 18:55:29.373044 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" path="/var/lib/kubelet/pods/86fd5359-56b1-4eb8-84ab-e4d39abc824d/volumes" Dec 08 18:55:30 crc kubenswrapper[4998]: I1208 18:55:30.160490 4998 patch_prober.go:28] interesting pod/downloads-747b44746d-ln56w container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" start-of-body= Dec 08 18:55:30 crc kubenswrapper[4998]: I1208 18:55:30.160579 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-ln56w" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.6:8080/\": dial tcp 10.217.0.6:8080: connect: connection refused" Dec 08 18:55:31 crc kubenswrapper[4998]: I1208 18:55:31.233570 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Dec 08 18:55:31 crc kubenswrapper[4998]: I1208 18:55:31.233673 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:55:31 crc kubenswrapper[4998]: I1208 18:55:31.249079 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-676c7bbc99-xk57m"] Dec 08 18:55:31 crc kubenswrapper[4998]: I1208 18:55:31.249429 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" containerName="controller-manager" containerID="cri-o://d9af25cb52c53430761cfb600f61bd42bbdb677ed958250829c1a4f92d731693" gracePeriod=30 Dec 08 18:55:31 crc kubenswrapper[4998]: I1208 18:55:31.263388 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb"] Dec 08 18:55:31 crc kubenswrapper[4998]: I1208 18:55:31.263826 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" containerName="route-controller-manager" containerID="cri-o://ebe7147dbe2d93dac0a13e503551f5bdd2cafb758921deef3dc012f044f250eb" gracePeriod=30 Dec 08 18:55:32 crc kubenswrapper[4998]: I1208 18:55:32.772133 4998 generic.go:358] "Generic (PLEG): container finished" podID="51c3121f-25f2-4ca4-8d6e-085650249cc0" containerID="ebe7147dbe2d93dac0a13e503551f5bdd2cafb758921deef3dc012f044f250eb" exitCode=0 Dec 08 18:55:32 crc kubenswrapper[4998]: I1208 18:55:32.772217 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" event={"ID":"51c3121f-25f2-4ca4-8d6e-085650249cc0","Type":"ContainerDied","Data":"ebe7147dbe2d93dac0a13e503551f5bdd2cafb758921deef3dc012f044f250eb"} Dec 08 18:55:32 crc kubenswrapper[4998]: I1208 18:55:32.773516 4998 generic.go:358] "Generic (PLEG): container finished" podID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" containerID="d9af25cb52c53430761cfb600f61bd42bbdb677ed958250829c1a4f92d731693" exitCode=0 Dec 08 18:55:32 crc kubenswrapper[4998]: I1208 18:55:32.773600 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" event={"ID":"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8","Type":"ContainerDied","Data":"d9af25cb52c53430761cfb600f61bd42bbdb677ed958250829c1a4f92d731693"} Dec 08 18:55:33 crc kubenswrapper[4998]: I1208 18:55:33.987219 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.021610 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh"] Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022328 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="registry-server" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022356 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="registry-server" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022375 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="extract-utilities" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022384 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="extract-utilities" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022402 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" containerName="route-controller-manager" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022411 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" containerName="route-controller-manager" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022431 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="extract-content" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022438 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="extract-content" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022543 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="86fd5359-56b1-4eb8-84ab-e4d39abc824d" containerName="registry-server" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.022560 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" containerName="route-controller-manager" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.026667 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.035308 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-client-ca\") pod \"51c3121f-25f2-4ca4-8d6e-085650249cc0\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.035389 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-config\") pod \"51c3121f-25f2-4ca4-8d6e-085650249cc0\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.035495 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3121f-25f2-4ca4-8d6e-085650249cc0-serving-cert\") pod \"51c3121f-25f2-4ca4-8d6e-085650249cc0\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.035626 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fpxdh\" (UniqueName: \"kubernetes.io/projected/51c3121f-25f2-4ca4-8d6e-085650249cc0-kube-api-access-fpxdh\") pod \"51c3121f-25f2-4ca4-8d6e-085650249cc0\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.035715 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51c3121f-25f2-4ca4-8d6e-085650249cc0-tmp\") pod \"51c3121f-25f2-4ca4-8d6e-085650249cc0\" (UID: \"51c3121f-25f2-4ca4-8d6e-085650249cc0\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.036470 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-config" (OuterVolumeSpecName: "config") pod "51c3121f-25f2-4ca4-8d6e-085650249cc0" (UID: "51c3121f-25f2-4ca4-8d6e-085650249cc0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.037821 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51c3121f-25f2-4ca4-8d6e-085650249cc0-tmp" (OuterVolumeSpecName: "tmp") pod "51c3121f-25f2-4ca4-8d6e-085650249cc0" (UID: "51c3121f-25f2-4ca4-8d6e-085650249cc0"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.038159 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-client-ca" (OuterVolumeSpecName: "client-ca") pod "51c3121f-25f2-4ca4-8d6e-085650249cc0" (UID: "51c3121f-25f2-4ca4-8d6e-085650249cc0"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.047447 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51c3121f-25f2-4ca4-8d6e-085650249cc0-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "51c3121f-25f2-4ca4-8d6e-085650249cc0" (UID: "51c3121f-25f2-4ca4-8d6e-085650249cc0"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.047713 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51c3121f-25f2-4ca4-8d6e-085650249cc0-kube-api-access-fpxdh" (OuterVolumeSpecName: "kube-api-access-fpxdh") pod "51c3121f-25f2-4ca4-8d6e-085650249cc0" (UID: "51c3121f-25f2-4ca4-8d6e-085650249cc0"). InnerVolumeSpecName "kube-api-access-fpxdh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.053756 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh"] Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137519 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4856cf8e-176d-43fc-9d92-3ca53d5b7718-config\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137568 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4856cf8e-176d-43fc-9d92-3ca53d5b7718-tmp\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137671 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4856cf8e-176d-43fc-9d92-3ca53d5b7718-client-ca\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137794 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvl28\" (UniqueName: \"kubernetes.io/projected/4856cf8e-176d-43fc-9d92-3ca53d5b7718-kube-api-access-hvl28\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137852 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4856cf8e-176d-43fc-9d92-3ca53d5b7718-serving-cert\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137909 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/51c3121f-25f2-4ca4-8d6e-085650249cc0-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137924 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fpxdh\" (UniqueName: \"kubernetes.io/projected/51c3121f-25f2-4ca4-8d6e-085650249cc0-kube-api-access-fpxdh\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc 
kubenswrapper[4998]: I1208 18:55:34.137934 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/51c3121f-25f2-4ca4-8d6e-085650249cc0-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137945 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.137954 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/51c3121f-25f2-4ca4-8d6e-085650249cc0-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.239124 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4856cf8e-176d-43fc-9d92-3ca53d5b7718-serving-cert\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.239196 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4856cf8e-176d-43fc-9d92-3ca53d5b7718-config\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.239248 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4856cf8e-176d-43fc-9d92-3ca53d5b7718-tmp\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.239816 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/4856cf8e-176d-43fc-9d92-3ca53d5b7718-tmp\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.239898 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4856cf8e-176d-43fc-9d92-3ca53d5b7718-client-ca\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.239920 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hvl28\" (UniqueName: \"kubernetes.io/projected/4856cf8e-176d-43fc-9d92-3ca53d5b7718-kube-api-access-hvl28\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.240518 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4856cf8e-176d-43fc-9d92-3ca53d5b7718-config\") pod 
\"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.240546 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4856cf8e-176d-43fc-9d92-3ca53d5b7718-client-ca\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.245013 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4856cf8e-176d-43fc-9d92-3ca53d5b7718-serving-cert\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.372611 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvl28\" (UniqueName: \"kubernetes.io/projected/4856cf8e-176d-43fc-9d92-3ca53d5b7718-kube-api-access-hvl28\") pod \"route-controller-manager-5d74dcb5f7-58dwh\" (UID: \"4856cf8e-176d-43fc-9d92-3ca53d5b7718\") " pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.394564 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.436201 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f49df8759-xdcc9"] Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.436916 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" containerName="controller-manager" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.436946 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" containerName="controller-manager" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.437072 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" containerName="controller-manager" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.445985 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459059 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-serving-cert\") pod \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459171 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-proxy-ca-bundles\") pod \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459237 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-config\") pod \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459279 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-client-ca\") pod \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459328 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd5kx\" (UniqueName: \"kubernetes.io/projected/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-kube-api-access-vd5kx\") pod \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459404 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-tmp\") pod \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\" (UID: \"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8\") " Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459502 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-client-ca\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459546 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-proxy-ca-bundles\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459572 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0724f21-53ad-44bd-a12e-e26537dbf0ae-serving-cert\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 
18:55:34.459626 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-config\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459678 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whq69\" (UniqueName: \"kubernetes.io/projected/b0724f21-53ad-44bd-a12e-e26537dbf0ae-kube-api-access-whq69\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.459732 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0724f21-53ad-44bd-a12e-e26537dbf0ae-tmp\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.460345 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f49df8759-xdcc9"] Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.460552 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-client-ca" (OuterVolumeSpecName: "client-ca") pod "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" (UID: "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.460953 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" (UID: "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.461365 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-config" (OuterVolumeSpecName: "config") pod "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" (UID: "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.463246 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-tmp" (OuterVolumeSpecName: "tmp") pod "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" (UID: "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.465437 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-kube-api-access-vd5kx" (OuterVolumeSpecName: "kube-api-access-vd5kx") pod "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" (UID: "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8"). InnerVolumeSpecName "kube-api-access-vd5kx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.468090 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" (UID: "25b53d3d-812c-4cf9-975b-cdb1e10bd5a8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.560159 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0724f21-53ad-44bd-a12e-e26537dbf0ae-tmp\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.560212 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-client-ca\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.560350 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-proxy-ca-bundles\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.560400 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0724f21-53ad-44bd-a12e-e26537dbf0ae-serving-cert\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.560665 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-config\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.560857 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-whq69\" (UniqueName: \"kubernetes.io/projected/b0724f21-53ad-44bd-a12e-e26537dbf0ae-kube-api-access-whq69\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.561000 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.561108 4998 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-serving-cert\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.561212 
4998 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.561305 4998 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-config\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.561390 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-client-ca\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.561949 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-proxy-ca-bundles\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.561399 4998 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-client-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.562041 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vd5kx\" (UniqueName: \"kubernetes.io/projected/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8-kube-api-access-vd5kx\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.562066 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b0724f21-53ad-44bd-a12e-e26537dbf0ae-config\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.560782 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b0724f21-53ad-44bd-a12e-e26537dbf0ae-tmp\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.564284 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b0724f21-53ad-44bd-a12e-e26537dbf0ae-serving-cert\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.578929 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-whq69\" (UniqueName: \"kubernetes.io/projected/b0724f21-53ad-44bd-a12e-e26537dbf0ae-kube-api-access-whq69\") pod \"controller-manager-7f49df8759-xdcc9\" (UID: \"b0724f21-53ad-44bd-a12e-e26537dbf0ae\") " pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.633239 4998 kubelet.go:2547] "SyncLoop REMOVE" 
source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.633896 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://6cdbaa0c4eef1fcaea7d8f929f2a5f9fbf498aac9c6d6f7b551d1b60c2e623b4" gracePeriod=15 Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.633975 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a" gracePeriod=15 Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.633921 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0" gracePeriod=15 Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.633947 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38" gracePeriod=15 Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.634329 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd" gracePeriod=15 Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.637742 4998 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638902 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638920 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638933 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638943 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638957 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638963 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638976 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.638982 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639003 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639009 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639026 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639031 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639042 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639048 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639056 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639063 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639070 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639076 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639179 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639191 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639203 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639213 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639223 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 
18:55:34.639234 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639244 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639363 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639371 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639475 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.639484 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.640667 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.776107 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.939591 4998 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.939871 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.940169 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.941578 4998 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.942055 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.942632 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.943151 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.943509 4998 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.970089 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.970752 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.970996 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.971190 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:34 crc kubenswrapper[4998]: I1208 18:55:34.971339 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.022856 4998 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" start-of-body= Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.023069 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="Get \"https://192.168.126.11:6443/readyz\": dial tcp 192.168.126.11:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.024385 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event=< Dec 08 18:55:35 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f525c3a6a23ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:6443/readyz": dial tcp 192.168.126.11:6443: connect: connection refused Dec 08 18:55:35 crc kubenswrapper[4998]: body: Dec 08 18:55:35 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:55:35.023027183 +0000 UTC m=+238.671069883,LastTimestamp:2025-12-08 18:55:35.023027183 +0000 UTC m=+238.671069883,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:55:35 crc kubenswrapper[4998]: > Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.045512 4998 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.045974 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.047745 4998 status_manager.go:895] "Failed to get 
status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.048332 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.048543 4998 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.048752 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.084740 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.085330 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.085364 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.085416 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.085823 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.085863 4998 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.087133 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.087652 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.087796 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.088081 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.160396 4998 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 08 18:55:35 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52" Netns:"/var/run/netns/beff1637-8df2-4676-9b2a-3e61ae865889" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 
18:55:35 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:35 crc kubenswrapper[4998]: > Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.160541 4998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Dec 08 18:55:35 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52" Netns:"/var/run/netns/beff1637-8df2-4676-9b2a-3e61ae865889" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:35 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:35 crc kubenswrapper[4998]: > pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.160581 4998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Dec 08 18:55:35 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52" Netns:"/var/run/netns/beff1637-8df2-4676-9b2a-3e61ae865889" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:35 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:35 crc kubenswrapper[4998]: > pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.160711 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager(4856cf8e-176d-43fc-9d92-3ca53d5b7718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager(4856cf8e-176d-43fc-9d92-3ca53d5b7718)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52\\\" Netns:\\\"/var/run/netns/beff1637-8df2-4676-9b2a-3e61ae865889\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=f25ac4e0b830e8a4bd0d72d548a623bca30be5672c82ca9778312d88ca252c52;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: 
[openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s\\\": dial tcp 38.102.83.145:6443: connect: connection refused\\n': StdinData: {\\\"auxiliaryCNIChainName\\\":\\\"vendor-cni-chain\\\",\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podUID="4856cf8e-176d-43fc-9d92-3ca53d5b7718" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.276486 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" event={"ID":"51c3121f-25f2-4ca4-8d6e-085650249cc0","Type":"ContainerDied","Data":"fdf3fd17f275ff2d13e0a88bfd94ce98b573eee9ec3917d00b503c711addd88a"} Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.276798 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.276934 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" event={"ID":"25b53d3d-812c-4cf9-975b-cdb1e10bd5a8","Type":"ContainerDied","Data":"5e507d6ccf7512a99410c208eee91eb111a8019ed54e23f4bdb8ab5e09f8b0de"} Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.276969 4998 scope.go:117] "RemoveContainer" containerID="ebe7147dbe2d93dac0a13e503551f5bdd2cafb758921deef3dc012f044f250eb" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.278203 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.279654 4998 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.281262 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.288946 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.288985 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.289086 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.289172 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.289222 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.304466 4998 scope.go:117] "RemoveContainer" containerID="d9af25cb52c53430761cfb600f61bd42bbdb677ed958250829c1a4f92d731693" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.360800 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.361928 4998 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.393739 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.393836 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.393874 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.393965 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.393987 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.394082 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.395389 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.395399 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.395429 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.395741 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.464421 4998 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 08 
18:55:35 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5" Netns:"/var/run/netns/2a34cbed-9f92-4ad0-ba50-490028c306ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:35 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:35 crc kubenswrapper[4998]: > Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.464541 4998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Dec 08 18:55:35 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5" Netns:"/var/run/netns/2a34cbed-9f92-4ad0-ba50-490028c306ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:35 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:35 crc kubenswrapper[4998]: > pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.464563 4998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Dec 08 18:55:35 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5" Netns:"/var/run/netns/2a34cbed-9f92-4ad0-ba50-490028c306ac" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:35 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:35 crc kubenswrapper[4998]: > pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:35 crc kubenswrapper[4998]: E1208 18:55:35.464656 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"controller-manager-7f49df8759-xdcc9_openshift-controller-manager(b0724f21-53ad-44bd-a12e-e26537dbf0ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f49df8759-xdcc9_openshift-controller-manager(b0724f21-53ad-44bd-a12e-e26537dbf0ae)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5\\\" Netns:\\\"/var/run/netns/2a34cbed-9f92-4ad0-ba50-490028c306ac\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=227f28ab489c4526ec240f79eb480117a02f662364ce4fcc71c11785e3c035b5;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s\\\": dial tcp 38.102.83.145:6443: connect: connection refused\\n': StdinData: {\\\"auxiliaryCNIChainName\\\":\\\"vendor-cni-chain\\\",\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" podUID="b0724f21-53ad-44bd-a12e-e26537dbf0ae" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.663426 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:35 crc kubenswrapper[4998]: W1208 18:55:35.682454 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf7dbc7e1ee9c187a863ef9b473fad27b.slice/crio-55e6f384de1585aa62e11b76a143731c4880bd02fa695325715b665f92d3f1bb WatchSource:0}: Error finding container 55e6f384de1585aa62e11b76a143731c4880bd02fa695325715b665f92d3f1bb: Status 404 returned error can't find the container with id 55e6f384de1585aa62e11b76a143731c4880bd02fa695325715b665f92d3f1bb Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.809498 4998 generic.go:358] "Generic (PLEG): container finished" podID="b17b1a4a-1d39-41b9-a555-7059787fe36d" containerID="7bbc83ba5747ddf995c017daa22d564aacd547a540d18d8fa59ab07cbec58109" exitCode=0 Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.809649 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"b17b1a4a-1d39-41b9-a555-7059787fe36d","Type":"ContainerDied","Data":"7bbc83ba5747ddf995c017daa22d564aacd547a540d18d8fa59ab07cbec58109"} Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.810720 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.811003 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.811517 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.812396 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.813863 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.814828 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="6cdbaa0c4eef1fcaea7d8f929f2a5f9fbf498aac9c6d6f7b551d1b60c2e623b4" exitCode=0 Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.814928 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38" exitCode=0 Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.815013 4998 generic.go:358] "Generic (PLEG): 
container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0" exitCode=0 Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.815104 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a" exitCode=2 Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.814930 4998 scope.go:117] "RemoveContainer" containerID="1ceebc8ad91c67946c2d044d99acf49013aba6fda138772c7ce05fc2c32acd5a" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.816885 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"55e6f384de1585aa62e11b76a143731c4880bd02fa695325715b665f92d3f1bb"} Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.817962 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.818247 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.818384 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:35 crc kubenswrapper[4998]: I1208 18:55:35.819408 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.560592 4998 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 08 18:55:36 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8" Netns:"/var/run/netns/a059d834-6c93-45b1-a76c-f2763accdd7f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection 
refused Dec 08 18:55:36 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:36 crc kubenswrapper[4998]: > Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.561022 4998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Dec 08 18:55:36 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8" Netns:"/var/run/netns/a059d834-6c93-45b1-a76c-f2763accdd7f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:36 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:36 crc kubenswrapper[4998]: > pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.561045 4998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Dec 08 18:55:36 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8" Netns:"/var/run/netns/a059d834-6c93-45b1-a76c-f2763accdd7f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:36 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:36 crc kubenswrapper[4998]: > pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.561118 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f49df8759-xdcc9_openshift-controller-manager(b0724f21-53ad-44bd-a12e-e26537dbf0ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f49df8759-xdcc9_openshift-controller-manager(b0724f21-53ad-44bd-a12e-e26537dbf0ae)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8\\\" Netns:\\\"/var/run/netns/a059d834-6c93-45b1-a76c-f2763accdd7f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=74f5e9daa25730b9781c96f34328b7aa9930e241ab22565596d18512fb5009f8;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 
in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s\\\": dial tcp 38.102.83.145:6443: connect: connection refused\\n': StdinData: {\\\"auxiliaryCNIChainName\\\":\\\"vendor-cni-chain\\\",\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" podUID="b0724f21-53ad-44bd-a12e-e26537dbf0ae" Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.578808 4998 log.go:32] "RunPodSandbox from runtime service failed" err=< Dec 08 18:55:36 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e" Netns:"/var/run/netns/1ecdde4c-be41-42fd-9388-8cde156a84c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:36 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:36 crc kubenswrapper[4998]: > Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.578878 4998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=< Dec 08 18:55:36 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e" Netns:"/var/run/netns/1ecdde4c-be41-42fd-9388-8cde156a84c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:36 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:36 crc kubenswrapper[4998]: > pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.578899 4998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=< Dec 08 18:55:36 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e" Netns:"/var/run/netns/1ecdde4c-be41-42fd-9388-8cde156a84c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718" Path:"" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: 
[openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused Dec 08 18:55:36 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Dec 08 18:55:36 crc kubenswrapper[4998]: > pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:55:36 crc kubenswrapper[4998]: E1208 18:55:36.578973 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager(4856cf8e-176d-43fc-9d92-3ca53d5b7718)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager(4856cf8e-176d-43fc-9d92-3ca53d5b7718)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_route-controller-manager-5d74dcb5f7-58dwh_openshift-route-controller-manager_4856cf8e-176d-43fc-9d92-3ca53d5b7718_0(64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e): error adding pod openshift-route-controller-manager_route-controller-manager-5d74dcb5f7-58dwh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e\\\" Netns:\\\"/var/run/netns/1ecdde4c-be41-42fd-9388-8cde156a84c3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-route-controller-manager;K8S_POD_NAME=route-controller-manager-5d74dcb5f7-58dwh;K8S_POD_INFRA_CONTAINER_ID=64032e03e6e9f7949d08fa30a160927fe62a256186784fc047e4c6ab2da7cd0e;K8S_POD_UID=4856cf8e-176d-43fc-9d92-3ca53d5b7718\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh] networking: Multus: [openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh/4856cf8e-176d-43fc-9d92-3ca53d5b7718]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: SetNetworkStatus: failed to update the pod route-controller-manager-5d74dcb5f7-58dwh in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-5d74dcb5f7-58dwh?timeout=1m0s\\\": dial tcp 38.102.83.145:6443: connect: connection refused\\n': StdinData: 
{\\\"auxiliaryCNIChainName\\\":\\\"vendor-cni-chain\\\",\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podUID="4856cf8e-176d-43fc-9d92-3ca53d5b7718" Dec 08 18:55:36 crc kubenswrapper[4998]: I1208 18:55:36.827197 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.151377 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.152862 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.153364 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.153906 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.245497 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b17b1a4a-1d39-41b9-a555-7059787fe36d-kube-api-access\") pod \"b17b1a4a-1d39-41b9-a555-7059787fe36d\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.245560 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-kubelet-dir\") pod \"b17b1a4a-1d39-41b9-a555-7059787fe36d\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.245752 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-var-lock\") pod \"b17b1a4a-1d39-41b9-a555-7059787fe36d\" (UID: \"b17b1a4a-1d39-41b9-a555-7059787fe36d\") " Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.245891 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b17b1a4a-1d39-41b9-a555-7059787fe36d" (UID: "b17b1a4a-1d39-41b9-a555-7059787fe36d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.245975 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-var-lock" (OuterVolumeSpecName: "var-lock") pod "b17b1a4a-1d39-41b9-a555-7059787fe36d" (UID: "b17b1a4a-1d39-41b9-a555-7059787fe36d"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.246076 4998 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.246101 4998 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b17b1a4a-1d39-41b9-a555-7059787fe36d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.254078 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b17b1a4a-1d39-41b9-a555-7059787fe36d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b17b1a4a-1d39-41b9-a555-7059787fe36d" (UID: "b17b1a4a-1d39-41b9-a555-7059787fe36d"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.347508 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b17b1a4a-1d39-41b9-a555-7059787fe36d-kube-api-access\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.396563 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.397425 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.397944 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.840314 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53"} 
Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.840588 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:37 crc kubenswrapper[4998]: E1208 18:55:37.840932 4998 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.840943 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.841299 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.842085 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.842876 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"b17b1a4a-1d39-41b9-a555-7059787fe36d","Type":"ContainerDied","Data":"1bc76512d76359648ce265f24808478cb2d13a3d55c519c3708ee15c2d8d0a77"} Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.842908 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bc76512d76359648ce265f24808478cb2d13a3d55c519c3708ee15c2d8d0a77" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.842921 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.929438 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.935613 4998 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd" exitCode=0 Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.939139 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.939327 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:37 crc kubenswrapper[4998]: I1208 18:55:37.939474 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.779612 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.780941 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.781660 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.782025 4998 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.782215 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.782393 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.835917 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836077 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836132 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836207 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836230 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836328 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836422 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836712 4998 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836745 4998 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.836369 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.837085 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.841130 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:55:38 crc kubenswrapper[4998]: E1208 18:55:38.873032 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event=< Dec 08 18:55:38 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f525c3a6a23ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:6443/readyz": dial tcp 192.168.126.11:6443: connect: connection refused Dec 08 18:55:38 crc kubenswrapper[4998]: body: Dec 08 18:55:38 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:55:35.023027183 +0000 UTC m=+238.671069883,LastTimestamp:2025-12-08 18:55:35.023027183 +0000 UTC m=+238.671069883,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Dec 08 18:55:38 crc kubenswrapper[4998]: > Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.937728 4998 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.937756 4998 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.937768 4998 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.948496 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.949392 4998 util.go:48] "No ready sandbox for pod can be found. 
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.949392 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.949439 4998 scope.go:117] "RemoveContainer" containerID="6cdbaa0c4eef1fcaea7d8f929f2a5f9fbf498aac9c6d6f7b551d1b60c2e623b4"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.950487 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 18:55:38 crc kubenswrapper[4998]: E1208 18:55:38.952155 4998 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.967138 4998 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.967914 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.968499 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.968843 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.969329 4998 scope.go:117] "RemoveContainer" containerID="5a81796596b2ef96687678570c186687a215278fb734093c6efbcd0eb56cbc38"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.987075 4998 scope.go:117] "RemoveContainer" containerID="48e1456f7b946f039157aade5815bbc1832d8981475aa2bb16171be1298e20d0"
Dec 08 18:55:38 crc kubenswrapper[4998]: I1208 18:55:38.999482 4998 scope.go:117] "RemoveContainer" containerID="0b1a1688d545f6c6ff315d83518e17a5d53ace879436e2f572192143b8d3f29a"
Dec 08 18:55:39 crc kubenswrapper[4998]: I1208 18:55:39.012543 4998 scope.go:117] "RemoveContainer" containerID="919ab07b7eba26c5a7e13d60f505722b9cc365c7f8d90b05ac50a8c3233bf6dd"
Dec 08 18:55:39 crc kubenswrapper[4998]: I1208 18:55:39.029159 4998 scope.go:117] "RemoveContainer" containerID="bf39875cdf2fc8f4835cd732cdd63b1b4ca911d5b6a2045db29cda490a2dfef2"
Dec 08 18:55:39 crc kubenswrapper[4998]: I1208 18:55:39.375853 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes"
Dec 08 18:55:40 crc kubenswrapper[4998]: I1208 18:55:40.713239 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-ln56w"
Dec 08 18:55:40 crc kubenswrapper[4998]: I1208 18:55:40.714036 4998 status_manager.go:895] "Failed to get status for pod" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" pod="openshift-console/downloads-747b44746d-ln56w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-ln56w\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:40 crc kubenswrapper[4998]: I1208 18:55:40.715911 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:40 crc kubenswrapper[4998]: I1208 18:55:40.716223 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:40 crc kubenswrapper[4998]: I1208 18:55:40.716591 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.133471 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.134204 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.135201 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.135792 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.136336 4998 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:43 crc kubenswrapper[4998]: I1208 18:55:43.136409 4998 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.136937 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="200ms"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.337992 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="400ms"
Dec 08 18:55:43 crc kubenswrapper[4998]: E1208 18:55:43.739661 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="800ms"
Dec 08 18:55:44 crc kubenswrapper[4998]: E1208 18:55:44.298059 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:55:44Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:55:44Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:55:44Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-12-08T18:55:44Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:44 crc kubenswrapper[4998]: E1208 18:55:44.298394 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:44 crc kubenswrapper[4998]: E1208 18:55:44.298736 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:44 crc kubenswrapper[4998]: E1208 18:55:44.299067 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:44 crc kubenswrapper[4998]: E1208 18:55:44.299512 4998 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:44 crc kubenswrapper[4998]: E1208 18:55:44.299536 4998 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count"
Dec 08 18:55:44 crc kubenswrapper[4998]: E1208 18:55:44.540823 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="1.6s"
Dec 08 18:55:46 crc kubenswrapper[4998]: E1208 18:55:46.142277 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="3.2s"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.372783 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.374144 4998 status_manager.go:895] "Failed to get status for pod" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" pod="openshift-console/downloads-747b44746d-ln56w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-ln56w\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.374616 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.375049 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.760651 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.760760 4998 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4" exitCode=1
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.760866 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4"}
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.761439 4998 scope.go:117] "RemoveContainer" containerID="6481c4b05f65a7c33cc1e1744a218eebf5505d93ae40e2745e97c4a29a7ac2d4"
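The lease records above show the kubelet's two-stage retry behavior while the apiserver endpoint refuses connections: five in-place lease updates fail ("failed 5 attempts to update lease"), it falls back to ensuring the lease exists, and the retry interval doubles on each failure (200ms, 400ms, 800ms, 1.6s, 3.2s; a 6.4s attempt follows below). The node-status updater independently gives up after its own five attempts ("update node status exceeds retry count"). A rough sketch of that doubling retry loop, under the assumption that it is a plain capped exponential backoff (the function names here are hypothetical, not the client-go lease controller's API):

package main

import (
	"fmt"
	"time"
)

// ensureLeaseWithBackoff retries an operation, doubling the wait after each
// failure, matching the interval="200ms"/"400ms"/"800ms"/"1.6s"/"3.2s"/"6.4s"
// progression the kubelet logs while the apiserver is unreachable.
func ensureLeaseWithBackoff(try func() error) {
	interval := 200 * time.Millisecond
	const maxInterval = 7 * time.Second // assumption: the log tops out at 6.4s
	for {
		err := try()
		if err == nil {
			return
		}
		fmt.Printf("Failed to ensure lease exists, will retry; interval=%v err=%v\n", interval, err)
		time.Sleep(interval)
		if next := interval * 2; next <= maxInterval {
			interval = next
		}
	}
}

func main() {
	attempts := 0
	ensureLeaseWithBackoff(func() error {
		attempts++
		if attempts < 6 {
			return fmt.Errorf("dial tcp 38.102.83.145:6443: connect: connection refused")
		}
		return nil // the apiserver eventually comes back
	})
}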
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.761842 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.762261 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.762737 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.763021 4998 status_manager.go:895] "Failed to get status for pod" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" pod="openshift-console/downloads-747b44746d-ln56w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-ln56w\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:47 crc kubenswrapper[4998]: I1208 18:55:47.763359 4998 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:48 crc kubenswrapper[4998]: I1208 18:55:48.771474 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 18:55:48 crc kubenswrapper[4998]: I1208 18:55:48.771984 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"2f5e88fe9b8281c45da1b4155b9442f0bffca0a8242c5e3477925005cbda76a8"}
Dec 08 18:55:48 crc kubenswrapper[4998]: I1208 18:55:48.773154 4998 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:48 crc kubenswrapper[4998]: I1208 18:55:48.773644 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:48 crc kubenswrapper[4998]: I1208 18:55:48.773974 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:48 crc kubenswrapper[4998]: I1208 18:55:48.774461 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:48 crc kubenswrapper[4998]: I1208 18:55:48.774950 4998 status_manager.go:895] "Failed to get status for pod" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" pod="openshift-console/downloads-747b44746d-ln56w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-ln56w\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:48 crc kubenswrapper[4998]: E1208 18:55:48.873999 4998 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.145:6443: connect: connection refused" event=<
Dec 08 18:55:48 crc kubenswrapper[4998]: &Event{ObjectMeta:{kube-apiserver-crc.187f525c3a6a23ef openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:6443/readyz": dial tcp 192.168.126.11:6443: connect: connection refused
Dec 08 18:55:48 crc kubenswrapper[4998]: body:
Dec 08 18:55:48 crc kubenswrapper[4998]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-12-08 18:55:35.023027183 +0000 UTC m=+238.671069883,LastTimestamp:2025-12-08 18:55:35.023027183 +0000 UTC m=+238.671069883,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}
Dec 08 18:55:48 crc kubenswrapper[4998]: >
Dec 08 18:55:49 crc kubenswrapper[4998]: E1208 18:55:49.343962 4998 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.145:6443: connect: connection refused" interval="6.4s"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.365386 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.365397 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.366145 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.366959 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.367561 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.368078 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.368523 4998 status_manager.go:895] "Failed to get status for pod" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" pod="openshift-console/downloads-747b44746d-ln56w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-ln56w\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.368994 4998 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.390131 4998 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.390200 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:50 crc kubenswrapper[4998]: E1208 18:55:50.390812 4998 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
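The "Trying to delete pod" / "Deleting a mirror pod" failures above are the static-pod machinery at work: kube-apiserver-crc runs from an on-disk manifest, and the kubelet keeps a matching "mirror" pod in the API so the static pod is visible to API clients. Here the on-disk pod has been replaced (new UID 57755cc5...), so the stale mirror (UID 3bbf1d2f...) must be deleted and recreated, and that keeps failing because the API being called is the very pod that is restarting. A simplified sketch of the reconcile step, with hypothetical types and helpers (not the kubelet's actual mirror client):

package main

import "fmt"

// Pod is a minimal stand-in for the API object the kubelet compares.
type Pod struct {
	Name string
	UID  string
}

// syncMirrorPod recreates the API-side mirror when it no longer matches the
// on-disk static pod, mirroring the "Deleting a mirror pod" /
// "Creating a mirror pod for static pod" lines in this log.
func syncMirrorPod(static Pod, mirror *Pod, deletePod, createPod func(Pod) error) error {
	if mirror != nil && mirror.UID != static.UID {
		fmt.Printf("Deleting a mirror pod pod=%s podUID=%s\n", mirror.Name, mirror.UID)
		if err := deletePod(*mirror); err != nil {
			// While the apiserver is down this fails with connection refused,
			// exactly as mirror_client.go:138 reports above.
			return fmt.Errorf("failed deleting a mirror pod: %w", err)
		}
		mirror = nil
	}
	if mirror == nil {
		fmt.Printf("Creating a mirror pod for static pod pod=%s\n", static.Name)
		return createPod(static)
	}
	return nil
}

func main() {
	static := Pod{Name: "kube-apiserver-crc", UID: "57755cc5f99000cc11e193051474d4e2"}
	stale := &Pod{Name: "kube-apiserver-crc", UID: "3bbf1d2f-fd23-4a18-96bc-cfec142c5909"}
	ok := func(Pod) error { return nil } // stand-in API calls that succeed
	_ = syncMirrorPod(static, stale, ok, ok)
}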
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.391160 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:50 crc kubenswrapper[4998]: W1208 18:55:50.434912 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57755cc5f99000cc11e193051474d4e2.slice/crio-1e0890d35dde3d8a29b4d5bed3854ee9ef6b9ba829df5523caa0ab80fc6a55a7 WatchSource:0}: Error finding container 1e0890d35dde3d8a29b4d5bed3854ee9ef6b9ba829df5523caa0ab80fc6a55a7: Status 404 returned error can't find the container with id 1e0890d35dde3d8a29b4d5bed3854ee9ef6b9ba829df5523caa0ab80fc6a55a7
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.535428 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.783573 4998 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="55c2c69e0d0007012a21dca576d98a977d742585d4ac0547a4da2128c53c7055" exitCode=0
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.783662 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"55c2c69e0d0007012a21dca576d98a977d742585d4ac0547a4da2128c53c7055"}
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.783984 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"1e0890d35dde3d8a29b4d5bed3854ee9ef6b9ba829df5523caa0ab80fc6a55a7"}
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.784299 4998 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.784316 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:50 crc kubenswrapper[4998]: E1208 18:55:50.784850 4998 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.145:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.784959 4998 status_manager.go:895] "Failed to get status for pod" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" pod="openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-route-controller-manager/pods/route-controller-manager-85454d5c6-78tdb\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.785320 4998 status_manager.go:895] "Failed to get status for pod" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.785579 4998 status_manager.go:895] "Failed to get status for pod" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" pod="openshift-controller-manager/controller-manager-676c7bbc99-xk57m" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-676c7bbc99-xk57m\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.785782 4998 status_manager.go:895] "Failed to get status for pod" podUID="0f532410-7407-41fe-b95e-d1a785d4ebfe" pod="openshift-console/downloads-747b44746d-ln56w" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-747b44746d-ln56w\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: I1208 18:55:50.786045 4998 status_manager.go:895] "Failed to get status for pod" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.145:6443: connect: connection refused"
Dec 08 18:55:50 crc kubenswrapper[4998]: E1208 18:55:50.818610 4998 log.go:32] "RunPodSandbox from runtime service failed" err=<
Dec 08 18:55:50 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75" Netns:"/var/run/netns/d1702471-83fd-4acc-86d9-9795ebf62c10" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 08 18:55:50 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 08 18:55:50 crc kubenswrapper[4998]: >
Dec 08 18:55:50 crc kubenswrapper[4998]: E1208 18:55:50.818760 4998 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err=<
Dec 08 18:55:50 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75" Netns:"/var/run/netns/d1702471-83fd-4acc-86d9-9795ebf62c10" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 08 18:55:50 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 08 18:55:50 crc kubenswrapper[4998]: > pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9"
Dec 08 18:55:50 crc kubenswrapper[4998]: E1208 18:55:50.818854 4998 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err=<
Dec 08 18:55:50 crc kubenswrapper[4998]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75" Netns:"/var/run/netns/d1702471-83fd-4acc-86d9-9795ebf62c10" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae" Path:"" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get "https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s": dial tcp 38.102.83.145:6443: connect: connection refused
Dec 08 18:55:50 crc kubenswrapper[4998]: ': StdinData: {"auxiliaryCNIChainName":"vendor-cni-chain","binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Dec 08 18:55:50 crc kubenswrapper[4998]: > pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9"
Dec 08 18:55:50 crc kubenswrapper[4998]: E1208 18:55:50.819003 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"controller-manager-7f49df8759-xdcc9_openshift-controller-manager(b0724f21-53ad-44bd-a12e-e26537dbf0ae)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"controller-manager-7f49df8759-xdcc9_openshift-controller-manager(b0724f21-53ad-44bd-a12e-e26537dbf0ae)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_controller-manager-7f49df8759-xdcc9_openshift-controller-manager_b0724f21-53ad-44bd-a12e-e26537dbf0ae_0(c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75): error adding pod openshift-controller-manager_controller-manager-7f49df8759-xdcc9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75\\\" Netns:\\\"/var/run/netns/d1702471-83fd-4acc-86d9-9795ebf62c10\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-controller-manager;K8S_POD_NAME=controller-manager-7f49df8759-xdcc9;K8S_POD_INFRA_CONTAINER_ID=c713d79d6792c7f924e5dda940588d4716e095337b3a41e2baae007492f24b75;K8S_POD_UID=b0724f21-53ad-44bd-a12e-e26537dbf0ae\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-controller-manager/controller-manager-7f49df8759-xdcc9] networking: Multus: [openshift-controller-manager/controller-manager-7f49df8759-xdcc9/b0724f21-53ad-44bd-a12e-e26537dbf0ae]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: SetNetworkStatus: failed to update the pod controller-manager-7f49df8759-xdcc9 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-controller-manager/pods/controller-manager-7f49df8759-xdcc9?timeout=1m0s\\\": dial tcp 38.102.83.145:6443: connect: connection refused\\n': StdinData: {\\\"auxiliaryCNIChainName\\\":\\\"vendor-cni-chain\\\",\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" podUID="b0724f21-53ad-44bd-a12e-e26537dbf0ae"
Dec 08 18:55:51 crc kubenswrapper[4998]: I1208 18:55:51.370153 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh"
Dec 08 18:55:51 crc kubenswrapper[4998]: I1208 18:55:51.375769 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh"
Dec 08 18:55:51 crc kubenswrapper[4998]: I1208 18:55:51.800226 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"dc06399b7912bfb6d8b134374493ce41094b2ba695caac1c22eca1cc89fdecd7"}
Dec 08 18:55:51 crc kubenswrapper[4998]: I1208 18:55:51.800551 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"6a3e903b06f97b39d4f8faf44804806b9d8a64c13866155c1ff1cfcca88f5ef6"}
Dec 08 18:55:51 crc kubenswrapper[4998]: I1208 18:55:51.800566 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"876ca47a2ad3bfd3295d7186406287afb2b8c230dc4ac58355d15912ac037dcd"}
Dec 08 18:55:52 crc kubenswrapper[4998]: I1208 18:55:52.264799 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:55:52 crc kubenswrapper[4998]: I1208 18:55:52.289744 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:55:52 crc kubenswrapper[4998]: I1208 18:55:52.809566 4998 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:52 crc kubenswrapper[4998]: I1208 18:55:52.810828 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:52 crc kubenswrapper[4998]: I1208 18:55:52.809877 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"707cc00b98e1e5bb41a2abed520a9b94aca321fed47fd3c1a64decbb53413532"}
Dec 08 18:55:52 crc kubenswrapper[4998]: I1208 18:55:52.811062 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
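All three copies of the sandbox error carry the same single-line StdinData blob, the CNI configuration handed to multus-shim. Pretty-printed for readability (the content is unchanged from the log):

{
  "auxiliaryCNIChainName": "vendor-cni-chain",
  "binDir": "/var/lib/cni/bin",
  "clusterNetwork": "/host/run/multus/cni/net.d/10-ovn-kubernetes.conf",
  "cniVersion": "0.3.1",
  "daemonSocketDir": "/run/multus/socket",
  "globalNamespaces": "default,openshift-multus,openshift-sriov-network-operator,openshift-cnv",
  "logLevel": "verbose",
  "logToStderr": true,
  "name": "multus-cni-network",
  "namespaceIsolation": true,
  "type": "multus-shim"
}

The "ERRORED" text inside the error shows how far the CNI add actually got: the delegate network setup proceeded, but multus then tried to write the pod's network-status annotation back to the apiserver, and that write is what hit "connect: connection refused", failing the whole sandbox creation.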
Dec 08 18:55:52 crc kubenswrapper[4998]: I1208 18:55:52.811141 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"ed9a738aba2f3d25acfe58907bd930b63de7f7abdf6d857bacb2eb1ee21d8433"}
Dec 08 18:55:55 crc kubenswrapper[4998]: I1208 18:55:55.392589 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:55 crc kubenswrapper[4998]: I1208 18:55:55.392913 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:55 crc kubenswrapper[4998]: I1208 18:55:55.405614 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:57 crc kubenswrapper[4998]: W1208 18:55:57.617998 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4856cf8e_176d_43fc_9d92_3ca53d5b7718.slice/crio-da50fa4aa498dd0f0b6888eee409b9edc42c7d471ba5b35ab57c0f611c5da946 WatchSource:0}: Error finding container da50fa4aa498dd0f0b6888eee409b9edc42c7d471ba5b35ab57c0f611c5da946: Status 404 returned error can't find the container with id da50fa4aa498dd0f0b6888eee409b9edc42c7d471ba5b35ab57c0f611c5da946
Dec 08 18:55:57 crc kubenswrapper[4998]: I1208 18:55:57.823304 4998 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:57 crc kubenswrapper[4998]: I1208 18:55:57.823659 4998 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:57 crc kubenswrapper[4998]: I1208 18:55:57.836735 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" event={"ID":"4856cf8e-176d-43fc-9d92-3ca53d5b7718","Type":"ContainerStarted","Data":"ec14e231e4412ebf6b13a4cb4f7f9e5c0c707c88ac8bc4727b9c7d6399fd34cd"}
Dec 08 18:55:57 crc kubenswrapper[4998]: I1208 18:55:57.836780 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" event={"ID":"4856cf8e-176d-43fc-9d92-3ca53d5b7718","Type":"ContainerStarted","Data":"da50fa4aa498dd0f0b6888eee409b9edc42c7d471ba5b35ab57c0f611c5da946"}
Dec 08 18:55:57 crc kubenswrapper[4998]: I1208 18:55:57.837800 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh"
Dec 08 18:55:57 crc kubenswrapper[4998]: I1208 18:55:57.839359 4998 patch_prober.go:28] interesting pod/route-controller-manager-5d74dcb5f7-58dwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused" start-of-body=
Dec 08 18:55:57 crc kubenswrapper[4998]: I1208 18:55:57.839406 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podUID="4856cf8e-176d-43fc-9d92-3ca53d5b7718" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": dial tcp 10.217.0.65:8443: connect: connection refused"
Dec 08 18:55:58 crc kubenswrapper[4998]: I1208 18:55:58.843096 4998 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:58 crc kubenswrapper[4998]: I1208 18:55:58.843131 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:58 crc kubenswrapper[4998]: I1208 18:55:58.853956 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Dec 08 18:55:58 crc kubenswrapper[4998]: I1208 18:55:58.858647 4998 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="f8263149-8ff9-4af5-964c-194b50380de5"
Dec 08 18:55:59 crc kubenswrapper[4998]: I1208 18:55:59.842478 4998 patch_prober.go:28] interesting pod/route-controller-manager-5d74dcb5f7-58dwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded" start-of-body=
Dec 08 18:55:59 crc kubenswrapper[4998]: I1208 18:55:59.842570 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podUID="4856cf8e-176d-43fc-9d92-3ca53d5b7718" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded"
Dec 08 18:55:59 crc kubenswrapper[4998]: I1208 18:55:59.846783 4998 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:55:59 crc kubenswrapper[4998]: I1208 18:55:59.847513 4998 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3bbf1d2f-fd23-4a18-96bc-cfec142c5909"
Dec 08 18:56:01 crc kubenswrapper[4998]: I1208 18:56:01.234482 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 18:56:01 crc kubenswrapper[4998]: I1208 18:56:01.234604 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 18:56:02 crc kubenswrapper[4998]: I1208 18:56:02.819220 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Dec 08 18:56:04 crc kubenswrapper[4998]: I1208 18:56:04.365926 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9"
Dec 08 18:56:04 crc kubenswrapper[4998]: I1208 18:56:04.366511 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9"
Dec 08 18:56:04 crc kubenswrapper[4998]: I1208 18:56:04.876908 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" event={"ID":"b0724f21-53ad-44bd-a12e-e26537dbf0ae","Type":"ContainerStarted","Data":"cd6a10a4647ede3e9c1358c892105c26438e8dcfbd8bd26d611f5b062a345828"}
Dec 08 18:56:05 crc kubenswrapper[4998]: I1208 18:56:05.886550 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" event={"ID":"b0724f21-53ad-44bd-a12e-e26537dbf0ae","Type":"ContainerStarted","Data":"78f9d2a9cb556450d8dd6394c41f8621321b076e6f4cd08b4bdbbc60619172a1"}
Dec 08 18:56:05 crc kubenswrapper[4998]: I1208 18:56:05.886980 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9"
Dec 08 18:56:06 crc kubenswrapper[4998]: I1208 18:56:06.887438 4998 patch_prober.go:28] interesting pod/controller-manager-7f49df8759-xdcc9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 18:56:06 crc kubenswrapper[4998]: I1208 18:56:06.887583 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" podUID="b0724f21-53ad-44bd-a12e-e26537dbf0ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 18:56:07 crc kubenswrapper[4998]: I1208 18:56:07.092461 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\""
Dec 08 18:56:07 crc kubenswrapper[4998]: I1208 18:56:07.415454 4998 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="f8263149-8ff9-4af5-964c-194b50380de5"
Dec 08 18:56:07 crc kubenswrapper[4998]: I1208 18:56:07.893279 4998 patch_prober.go:28] interesting pod/controller-manager-7f49df8759-xdcc9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 18:56:07 crc kubenswrapper[4998]: I1208 18:56:07.893786 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" podUID="b0724f21-53ad-44bd-a12e-e26537dbf0ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 18:56:08 crc kubenswrapper[4998]: I1208 18:56:08.160410 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\""
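Three distinct probe-failure signatures appear in this stretch, and each tells you where the request died: "connect: connection refused" (nothing listening on the port yet), "context deadline exceeded" (the connection opened but no response arrived within the probe's deadline), and "net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" (the Go HTTP client's own timer fired during connection setup). A small sketch showing how a single GET can surface each of these, assuming a one-second timeout (the endpoint below is the machine-config-daemon health URL from the log):

package main

import (
	"context"
	"fmt"
	"net/http"
	"time"
)

// probeOnce issues one GET with both a client-level timeout and a per-request
// context deadline, so all three failure strings seen in the log can surface
// depending on where the request stalls.
func probeOnce(url string) {
	client := &http.Client{Timeout: 1 * time.Second}
	ctx, cancel := context.WithTimeout(context.Background(), 1*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		fmt.Println("bad request:", err)
		return
	}
	resp, err := client.Do(req)
	if err != nil {
		// "connect: connection refused": the port is closed (RST on SYN)
		// "context deadline exceeded": connected, but no response in time
		// "Client.Timeout exceeded while awaiting headers": the client timer fired
		fmt.Println("Probe failed:", err)
		return
	}
	resp.Body.Close()
	fmt.Println("probe ok:", resp.Status)
}

func main() {
	probeOnce("http://127.0.0.1:8798/health")
}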
Dec 08 18:56:08 crc kubenswrapper[4998]: I1208 18:56:08.343087 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\""
Dec 08 18:56:08 crc kubenswrapper[4998]: I1208 18:56:08.508911 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\""
Dec 08 18:56:08 crc kubenswrapper[4998]: I1208 18:56:08.889960 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\""
Dec 08 18:56:08 crc kubenswrapper[4998]: I1208 18:56:08.905516 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\""
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.007587 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\""
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.279956 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\""
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.559817 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\""
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.617792 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\""
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.618923 4998 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66"
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.620969 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podStartSLOduration=38.620945502 podStartE2EDuration="38.620945502s" podCreationTimestamp="2025-12-08 18:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:55:57.855620241 +0000 UTC m=+261.503662931" watchObservedRunningTime="2025-12-08 18:56:09.620945502 +0000 UTC m=+273.268988222"
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.623885 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" podStartSLOduration=38.623834941 podStartE2EDuration="38.623834941s" podCreationTimestamp="2025-12-08 18:55:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:56:05.906346938 +0000 UTC m=+269.554389648" watchObservedRunningTime="2025-12-08 18:56:09.623834941 +0000 UTC m=+273.271877671"
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.626771 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-676c7bbc99-xk57m","openshift-kube-apiserver/kube-apiserver-crc","openshift-route-controller-manager/route-controller-manager-85454d5c6-78tdb"]
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.626847 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.626868 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh","openshift-controller-manager/controller-manager-7f49df8759-xdcc9"]
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.655363 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=12.655343636 podStartE2EDuration="12.655343636s" podCreationTimestamp="2025-12-08 18:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:56:09.65402321 +0000 UTC m=+273.302065900" watchObservedRunningTime="2025-12-08 18:56:09.655343636 +0000 UTC m=+273.303386336"
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.844114 4998 patch_prober.go:28] interesting pod/route-controller-manager-5d74dcb5f7-58dwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.844233 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podUID="4856cf8e-176d-43fc-9d92-3ca53d5b7718" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Dec 08 18:56:09 crc kubenswrapper[4998]: I1208 18:56:09.959472 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.068466 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.104410 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.181328 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.362205 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.467364 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.467531 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.475770 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\""
Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.597481 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\""
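The pod_startup_latency_tracker lines above are plain wall-clock subtractions, and the log's own numbers check out: route-controller-manager-5d74dcb5f7-58dwh and controller-manager-7f49df8759-xdcc9 were created at 18:55:31, and their durations are measured at watch-observation time, 18:56:09.620945502 - 18:55:31 = 38.620945502s and 18:56:09.623834941 - 18:55:31 = 38.623834941s; the kube-apiserver-crc mirror pod was created at 18:55:57, giving 18:56:09.655343636 - 18:55:57 = 12.655343636s. podStartSLOduration and podStartE2EDuration coincide here because firstStartedPulling/lastFinishedPulling are the zero time: no image pull happened, so there is no pull window to subtract from the end-to-end duration.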
namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.627868 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" podUID="b0724f21-53ad-44bd-a12e-e26537dbf0ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.671822 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.834302 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.844929 4998 patch_prober.go:28] interesting pod/route-controller-manager-5d74dcb5f7-58dwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.845015 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podUID="4856cf8e-176d-43fc-9d92-3ca53d5b7718" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 18:56:10 crc kubenswrapper[4998]: I1208 18:56:10.928300 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.040494 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.054807 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.090653 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.247078 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.376902 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25b53d3d-812c-4cf9-975b-cdb1e10bd5a8" path="/var/lib/kubelet/pods/25b53d3d-812c-4cf9-975b-cdb1e10bd5a8/volumes" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.377509 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51c3121f-25f2-4ca4-8d6e-085650249cc0" path="/var/lib/kubelet/pods/51c3121f-25f2-4ca4-8d6e-085650249cc0/volumes" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 
18:56:11.590650 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.665252 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.684253 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.698273 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.729300 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.745274 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.764566 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.806842 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.861179 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.871518 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.901311 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:56:11 crc kubenswrapper[4998]: I1208 18:56:11.904789 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.002479 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.107964 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.124939 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.163114 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.184019 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.224589 
4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.330837 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.370129 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.417492 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.544053 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.547954 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.684460 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.713511 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.850926 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.945406 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Dec 08 18:56:12 crc kubenswrapper[4998]: I1208 18:56:12.979346 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.027323 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.082746 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.145972 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.166868 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.203932 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.257446 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.261147 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.284827 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.289629 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.313950 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.314843 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.321111 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.337858 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.362457 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.411502 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.419617 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.665990 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.702109 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.837384 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.845918 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.849186 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Dec 08 18:56:13 crc kubenswrapper[4998]: I1208 18:56:13.911018 4998 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.048211 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.059640 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.310017 4998 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.346216 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.347389 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.352131 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.409158 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.449459 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.490363 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.625350 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.636320 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.696926 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.817389 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.836271 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.882787 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.887366 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.905064 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.941915 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:14 crc kubenswrapper[4998]: I1208 18:56:14.994349 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.032043 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.065319 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.113614 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.155038 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.155497 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.267600 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.283789 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.332032 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.418327 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.444738 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.485654 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.502414 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.731975 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.809880 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.841581 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.890853 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.905387 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.964110 4998 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Dec 08 18:56:15 crc kubenswrapper[4998]: I1208 18:56:15.998165 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.041229 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.044806 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.071621 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.080281 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.090846 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.107262 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.129248 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.146657 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.159599 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.167995 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.180821 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.237672 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.399633 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.498000 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.538311 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.870503 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.892719 4998 
reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Dec 08 18:56:16 crc kubenswrapper[4998]: I1208 18:56:16.931649 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.001994 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.043161 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.045558 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.070005 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.121643 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.129870 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.142136 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.539945 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.561746 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.586607 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.606572 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.622824 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.665635 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.754607 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.765739 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.809506 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.825881 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.882574 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.896750 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.908281 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.929852 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.973201 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:17 crc kubenswrapper[4998]: I1208 18:56:17.978448 4998 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.010582 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.032560 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.065858 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.112300 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.175044 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.349903 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.404974 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.409744 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.500041 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.578072 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 
18:56:18.666576 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.697262 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.883350 4998 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:56:18 crc kubenswrapper[4998]: I1208 18:56:18.895666 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.016878 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.029772 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.106099 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.204028 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.248707 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.280323 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.350014 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.403646 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.441007 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.476010 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.499964 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.541137 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.634197 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.683635 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Dec 
08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.716872 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.781924 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Dec 08 18:56:19 crc kubenswrapper[4998]: I1208 18:56:19.805462 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.077578 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.091806 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.166441 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.319070 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.346396 4998 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.346817 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53" gracePeriod=5 Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.444523 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.446491 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.453752 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.509365 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.543558 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.628465 4998 patch_prober.go:28] interesting pod/controller-manager-7f49df8759-xdcc9 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded" start-of-body= Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.628816 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" 
podUID="b0724f21-53ad-44bd-a12e-e26537dbf0ae" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.66:8443/healthz\": context deadline exceeded" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.630426 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.720341 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.826339 4998 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.826907 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.826907 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.830758 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.831104 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.845486 4998 patch_prober.go:28] interesting pod/route-controller-manager-5d74dcb5f7-58dwh container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded" start-of-body= Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.845632 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" podUID="4856cf8e-176d-43fc-9d92-3ca53d5b7718" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.65:8443/healthz\": context deadline exceeded" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.862741 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.881400 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.887418 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.897436 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.943208 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Dec 08 18:56:20 crc kubenswrapper[4998]: I1208 18:56:20.994802 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:20 crc 
kubenswrapper[4998]: I1208 18:56:20.999445 4998 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.108454 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.120740 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.165830 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.444447 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.688549 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.770542 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.819092 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Dec 08 18:56:21 crc kubenswrapper[4998]: I1208 18:56:21.943616 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.048885 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.268279 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.507102 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.522894 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.562395 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.646578 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.819172 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.843318 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.857763 4998 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Dec 08 18:56:22 crc kubenswrapper[4998]: I1208 18:56:22.902200 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Dec 08 18:56:23 crc kubenswrapper[4998]: I1208 18:56:23.035301 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Dec 08 18:56:23 crc kubenswrapper[4998]: I1208 18:56:23.104349 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Dec 08 18:56:23 crc kubenswrapper[4998]: I1208 18:56:23.177969 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Dec 08 18:56:23 crc kubenswrapper[4998]: I1208 18:56:23.597920 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Dec 08 18:56:23 crc kubenswrapper[4998]: I1208 18:56:23.604766 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Dec 08 18:56:23 crc kubenswrapper[4998]: I1208 18:56:23.648717 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Dec 08 18:56:23 crc kubenswrapper[4998]: I1208 18:56:23.936644 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Dec 08 18:56:24 crc kubenswrapper[4998]: I1208 18:56:24.055580 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Dec 08 18:56:24 crc kubenswrapper[4998]: I1208 18:56:24.307632 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Dec 08 18:56:24 crc kubenswrapper[4998]: I1208 18:56:24.567301 4998 ???:1] "http: TLS handshake error from 192.168.126.11:51362: no serving certificate available for the kubelet" Dec 08 18:56:24 crc kubenswrapper[4998]: I1208 18:56:24.781192 4998 ???:1] "http: TLS handshake error from 192.168.126.11:51374: no serving certificate available for the kubelet" Dec 08 18:56:25 crc kubenswrapper[4998]: I1208 18:56:25.954572 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 18:56:25 crc kubenswrapper[4998]: I1208 18:56:25.954666 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:56:25 crc kubenswrapper[4998]: I1208 18:56:25.956413 4998 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005427 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005496 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005562 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005630 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005857 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005767 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005788 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.005877 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.006056 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.006524 4998 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.006597 4998 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.006624 4998 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.006648 4998 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.018967 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.071832 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.071930 4998 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53" exitCode=137 Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.072061 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.072081 4998 scope.go:117] "RemoveContainer" containerID="f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.093498 4998 scope.go:117] "RemoveContainer" containerID="f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.093606 4998 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 18:56:26 crc kubenswrapper[4998]: E1208 18:56:26.093969 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53\": container with ID starting with f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53 not found: ID does not exist" containerID="f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.094023 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53"} err="failed to get container status \"f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53\": rpc error: code = NotFound desc = could not find container \"f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53\": container with ID starting with f872a7038d06304a3c4191099689c88430175d6bb23af1fb509bab7a5d200a53 not found: ID does not exist" Dec 08 18:56:26 crc kubenswrapper[4998]: I1208 18:56:26.108146 4998 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:27 crc kubenswrapper[4998]: I1208 18:56:27.373238 4998 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Dec 08 18:56:27 crc kubenswrapper[4998]: I1208 18:56:27.373402 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Dec 08 18:56:29 crc kubenswrapper[4998]: I1208 18:56:29.633727 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f49df8759-xdcc9" Dec 08 18:56:29 crc kubenswrapper[4998]: I1208 18:56:29.857142 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-5d74dcb5f7-58dwh" Dec 08 18:56:31 crc kubenswrapper[4998]: I1208 18:56:31.233329 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:56:31 crc kubenswrapper[4998]: I1208 18:56:31.233414 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:56:31 crc kubenswrapper[4998]: I1208 18:56:31.233493 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:56:31 crc kubenswrapper[4998]: I1208 18:56:31.234282 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157"} pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:56:31 crc kubenswrapper[4998]: I1208 18:56:31.234333 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" containerID="cri-o://7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157" gracePeriod=600 Dec 08 18:56:32 crc kubenswrapper[4998]: I1208 18:56:32.106849 4998 generic.go:358] "Generic (PLEG): container finished" podID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerID="7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157" exitCode=0 Dec 08 18:56:32 crc kubenswrapper[4998]: I1208 18:56:32.107019 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerDied","Data":"7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157"} Dec 08 18:56:32 crc kubenswrapper[4998]: I1208 18:56:32.107779 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"d2f2eaca5a5842e093b639eb96abc33ec6a8a19f824e70f15725948e6787494a"} Dec 08 18:56:33 crc kubenswrapper[4998]: I1208 18:56:33.747174 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Dec 08 18:56:35 crc kubenswrapper[4998]: I1208 18:56:35.539771 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Dec 08 18:56:36 crc kubenswrapper[4998]: I1208 18:56:36.612469 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Dec 08 18:56:37 crc kubenswrapper[4998]: I1208 18:56:37.481920 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:56:37 crc kubenswrapper[4998]: I1208 18:56:37.486497 4998 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 18:56:39 crc kubenswrapper[4998]: I1208 18:56:39.436825 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:39 crc kubenswrapper[4998]: I1208 18:56:39.929080 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:40 crc kubenswrapper[4998]: I1208 18:56:40.998298 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcgg8"] Dec 08 18:56:40 crc kubenswrapper[4998]: I1208 18:56:40.998621 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-lcgg8" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="registry-server" containerID="cri-o://cd9c3e386824c0b192c00174490bfea1b33f2f40b5f09ed93162137269c113ad" gracePeriod=30 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.015282 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljbvf"] Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.015839 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ljbvf" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="registry-server" containerID="cri-o://110cca7fae63e90b83e32912d38f4d637284f3e6177cf50550feba27b537908f" gracePeriod=30 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.029001 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"] Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.029309 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerName="marketplace-operator" containerID="cri-o://037d130d2c963e61930a7daa64232930a4f34b090a62029baf790d1b0fdafb0c" gracePeriod=30 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.036362 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhqvx"] Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.038815 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fhqvx" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="registry-server" containerID="cri-o://859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7" gracePeriod=30 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.055404 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s4gnq"] Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.055935 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s4gnq" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="registry-server" containerID="cri-o://fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306" gracePeriod=30 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.061492 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-svhlx"] Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.062210 
4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" containerName="installer" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.062243 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" containerName="installer" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.062269 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.062275 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.062363 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.062376 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="b17b1a4a-1d39-41b9-a555-7059787fe36d" containerName="installer" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.084779 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-svhlx"] Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.084893 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.175472 4998 generic.go:358] "Generic (PLEG): container finished" podID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerID="037d130d2c963e61930a7daa64232930a4f34b090a62029baf790d1b0fdafb0c" exitCode=0 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.175601 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" event={"ID":"d5cb67e5-9aca-42f2-8034-6d97ea435de5","Type":"ContainerDied","Data":"037d130d2c963e61930a7daa64232930a4f34b090a62029baf790d1b0fdafb0c"} Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.176521 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.176553 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-tmp\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.176789 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5lh7\" (UniqueName: \"kubernetes.io/projected/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-kube-api-access-p5lh7\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.176869 4998 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.199965 4998 generic.go:358] "Generic (PLEG): container finished" podID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerID="cd9c3e386824c0b192c00174490bfea1b33f2f40b5f09ed93162137269c113ad" exitCode=0 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.200061 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcgg8" event={"ID":"b590b4bf-59f2-41c3-9284-1a05b5931ca8","Type":"ContainerDied","Data":"cd9c3e386824c0b192c00174490bfea1b33f2f40b5f09ed93162137269c113ad"} Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.208002 4998 generic.go:358] "Generic (PLEG): container finished" podID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerID="110cca7fae63e90b83e32912d38f4d637284f3e6177cf50550feba27b537908f" exitCode=0 Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.208136 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljbvf" event={"ID":"a920e838-b750-47a2-8241-bfd4d1d6f5b8","Type":"ContainerDied","Data":"110cca7fae63e90b83e32912d38f4d637284f3e6177cf50550feba27b537908f"} Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.277867 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-p5lh7\" (UniqueName: \"kubernetes.io/projected/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-kube-api-access-p5lh7\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.277928 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.277961 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.277981 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-tmp\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.278581 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-tmp\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.279937 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.287594 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.297552 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5lh7\" (UniqueName: \"kubernetes.io/projected/86eb9ffe-5899-4b11-b5bf-1d3dbf103a67-kube-api-access-p5lh7\") pod \"marketplace-operator-547dbd544d-svhlx\" (UID: \"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.415359 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.552635 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.678373 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.687341 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82n6c\" (UniqueName: \"kubernetes.io/projected/b590b4bf-59f2-41c3-9284-1a05b5931ca8-kube-api-access-82n6c\") pod \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.687402 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-catalog-content\") pod \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.687541 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-utilities\") pod \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\" (UID: \"b590b4bf-59f2-41c3-9284-1a05b5931ca8\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.689278 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-utilities" (OuterVolumeSpecName: "utilities") pod "b590b4bf-59f2-41c3-9284-1a05b5931ca8" (UID: "b590b4bf-59f2-41c3-9284-1a05b5931ca8"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.698351 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b590b4bf-59f2-41c3-9284-1a05b5931ca8-kube-api-access-82n6c" (OuterVolumeSpecName: "kube-api-access-82n6c") pod "b590b4bf-59f2-41c3-9284-1a05b5931ca8" (UID: "b590b4bf-59f2-41c3-9284-1a05b5931ca8"). InnerVolumeSpecName "kube-api-access-82n6c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.740838 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b590b4bf-59f2-41c3-9284-1a05b5931ca8" (UID: "b590b4bf-59f2-41c3-9284-1a05b5931ca8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.751145 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.788896 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-catalog-content\") pod \"3b36276e-af0d-4657-912a-df7c533bf822\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.788961 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-utilities\") pod \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.788986 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frvnh\" (UniqueName: \"kubernetes.io/projected/3b36276e-af0d-4657-912a-df7c533bf822-kube-api-access-frvnh\") pod \"3b36276e-af0d-4657-912a-df7c533bf822\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.789070 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-utilities\") pod \"3b36276e-af0d-4657-912a-df7c533bf822\" (UID: \"3b36276e-af0d-4657-912a-df7c533bf822\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.789182 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vqk5c\" (UniqueName: \"kubernetes.io/projected/a920e838-b750-47a2-8241-bfd4d1d6f5b8-kube-api-access-vqk5c\") pod \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.789211 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-catalog-content\") pod \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\" (UID: \"a920e838-b750-47a2-8241-bfd4d1d6f5b8\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.789398 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-82n6c\" (UniqueName: \"kubernetes.io/projected/b590b4bf-59f2-41c3-9284-1a05b5931ca8-kube-api-access-82n6c\") on node 
\"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.789414 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.789424 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b590b4bf-59f2-41c3-9284-1a05b5931ca8-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.791793 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-utilities" (OuterVolumeSpecName: "utilities") pod "3b36276e-af0d-4657-912a-df7c533bf822" (UID: "3b36276e-af0d-4657-912a-df7c533bf822"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.792252 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-utilities" (OuterVolumeSpecName: "utilities") pod "a920e838-b750-47a2-8241-bfd4d1d6f5b8" (UID: "a920e838-b750-47a2-8241-bfd4d1d6f5b8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.797209 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-svhlx"] Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.800048 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.800358 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a920e838-b750-47a2-8241-bfd4d1d6f5b8-kube-api-access-vqk5c" (OuterVolumeSpecName: "kube-api-access-vqk5c") pod "a920e838-b750-47a2-8241-bfd4d1d6f5b8" (UID: "a920e838-b750-47a2-8241-bfd4d1d6f5b8"). InnerVolumeSpecName "kube-api-access-vqk5c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.804634 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b36276e-af0d-4657-912a-df7c533bf822-kube-api-access-frvnh" (OuterVolumeSpecName: "kube-api-access-frvnh") pod "3b36276e-af0d-4657-912a-df7c533bf822" (UID: "3b36276e-af0d-4657-912a-df7c533bf822"). InnerVolumeSpecName "kube-api-access-frvnh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: W1208 18:56:41.805988 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86eb9ffe_5899_4b11_b5bf_1d3dbf103a67.slice/crio-bdea09dd588d872ed67aa06db58bf1756b681c896316e43359d5cd7f5cb0db6f WatchSource:0}: Error finding container bdea09dd588d872ed67aa06db58bf1756b681c896316e43359d5cd7f5cb0db6f: Status 404 returned error can't find the container with id bdea09dd588d872ed67aa06db58bf1756b681c896316e43359d5cd7f5cb0db6f Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.806811 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3b36276e-af0d-4657-912a-df7c533bf822" (UID: "3b36276e-af0d-4657-912a-df7c533bf822"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.808976 4998 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.845949 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891159 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f4c9t\" (UniqueName: \"kubernetes.io/projected/3af11570-35c5-4991-ae53-bfd38cdea120-kube-api-access-f4c9t\") pod \"3af11570-35c5-4991-ae53-bfd38cdea120\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891219 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-operator-metrics\") pod \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891252 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5cb67e5-9aca-42f2-8034-6d97ea435de5-tmp\") pod \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891287 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-trusted-ca\") pod \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\" (UID: \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891401 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-catalog-content\") pod \"3af11570-35c5-4991-ae53-bfd38cdea120\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891471 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwlss\" (UniqueName: \"kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss\") pod \"d5cb67e5-9aca-42f2-8034-6d97ea435de5\" (UID: 
\"d5cb67e5-9aca-42f2-8034-6d97ea435de5\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891535 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-utilities\") pod \"3af11570-35c5-4991-ae53-bfd38cdea120\" (UID: \"3af11570-35c5-4991-ae53-bfd38cdea120\") " Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891845 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891872 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vqk5c\" (UniqueName: \"kubernetes.io/projected/a920e838-b750-47a2-8241-bfd4d1d6f5b8-kube-api-access-vqk5c\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891885 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3b36276e-af0d-4657-912a-df7c533bf822-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891897 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.891910 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-frvnh\" (UniqueName: \"kubernetes.io/projected/3b36276e-af0d-4657-912a-df7c533bf822-kube-api-access-frvnh\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.892924 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5cb67e5-9aca-42f2-8034-6d97ea435de5-tmp" (OuterVolumeSpecName: "tmp") pod "d5cb67e5-9aca-42f2-8034-6d97ea435de5" (UID: "d5cb67e5-9aca-42f2-8034-6d97ea435de5"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.893573 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-utilities" (OuterVolumeSpecName: "utilities") pod "3af11570-35c5-4991-ae53-bfd38cdea120" (UID: "3af11570-35c5-4991-ae53-bfd38cdea120"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.893875 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "d5cb67e5-9aca-42f2-8034-6d97ea435de5" (UID: "d5cb67e5-9aca-42f2-8034-6d97ea435de5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.894910 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "d5cb67e5-9aca-42f2-8034-6d97ea435de5" (UID: "d5cb67e5-9aca-42f2-8034-6d97ea435de5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.896161 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af11570-35c5-4991-ae53-bfd38cdea120-kube-api-access-f4c9t" (OuterVolumeSpecName: "kube-api-access-f4c9t") pod "3af11570-35c5-4991-ae53-bfd38cdea120" (UID: "3af11570-35c5-4991-ae53-bfd38cdea120"). InnerVolumeSpecName "kube-api-access-f4c9t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.906471 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss" (OuterVolumeSpecName: "kube-api-access-bwlss") pod "d5cb67e5-9aca-42f2-8034-6d97ea435de5" (UID: "d5cb67e5-9aca-42f2-8034-6d97ea435de5"). InnerVolumeSpecName "kube-api-access-bwlss". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.913246 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a920e838-b750-47a2-8241-bfd4d1d6f5b8" (UID: "a920e838-b750-47a2-8241-bfd4d1d6f5b8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.993437 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bwlss\" (UniqueName: \"kubernetes.io/projected/d5cb67e5-9aca-42f2-8034-6d97ea435de5-kube-api-access-bwlss\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.993745 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.993757 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f4c9t\" (UniqueName: \"kubernetes.io/projected/3af11570-35c5-4991-ae53-bfd38cdea120-kube-api-access-f4c9t\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.993766 4998 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.993778 4998 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d5cb67e5-9aca-42f2-8034-6d97ea435de5-tmp\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.993788 4998 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d5cb67e5-9aca-42f2-8034-6d97ea435de5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:41 crc kubenswrapper[4998]: I1208 18:56:41.993796 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a920e838-b750-47a2-8241-bfd4d1d6f5b8-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.021515 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3af11570-35c5-4991-ae53-bfd38cdea120" (UID: "3af11570-35c5-4991-ae53-bfd38cdea120"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.094384 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3af11570-35c5-4991-ae53-bfd38cdea120-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.215325 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" event={"ID":"d5cb67e5-9aca-42f2-8034-6d97ea435de5","Type":"ContainerDied","Data":"dbf0b2ecf4ba9c051c99bd3381ec950411dae9a86cd2bdfa5ce27f372b952344"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.215443 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-pv6bl" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.215417 4998 scope.go:117] "RemoveContainer" containerID="037d130d2c963e61930a7daa64232930a4f34b090a62029baf790d1b0fdafb0c" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.369734 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-lcgg8" event={"ID":"b590b4bf-59f2-41c3-9284-1a05b5931ca8","Type":"ContainerDied","Data":"e9f9bfd310b1d14086aa44fff039869ee6c5858a625d4abe13d7d547f5fc4e4d"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.369796 4998 scope.go:117] "RemoveContainer" containerID="cd9c3e386824c0b192c00174490bfea1b33f2f40b5f09ed93162137269c113ad" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.369945 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-lcgg8" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.373757 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ljbvf" event={"ID":"a920e838-b750-47a2-8241-bfd4d1d6f5b8","Type":"ContainerDied","Data":"a39da08c34ee47bffc4a70044a43d81c684279c8a8b5cabe6e8e831d011b6cd1"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.374122 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ljbvf" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.376466 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" event={"ID":"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67","Type":"ContainerStarted","Data":"d976ad996e78dff66b479337efbcdf80e9c5fd5a0d7347f29faf7cad9272d899"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.376539 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" event={"ID":"86eb9ffe-5899-4b11-b5bf-1d3dbf103a67","Type":"ContainerStarted","Data":"bdea09dd588d872ed67aa06db58bf1756b681c896316e43359d5cd7f5cb0db6f"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.377333 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.384572 4998 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-svhlx container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" start-of-body= Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.384858 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" podUID="86eb9ffe-5899-4b11-b5bf-1d3dbf103a67" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.67:8080/healthz\": dial tcp 10.217.0.67:8080: connect: connection refused" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.395629 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.399770 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-pv6bl"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.401225 4998 generic.go:358] "Generic (PLEG): container finished" podID="3af11570-35c5-4991-ae53-bfd38cdea120" containerID="fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306" exitCode=0 Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.401416 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s4gnq" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.401531 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4gnq" event={"ID":"3af11570-35c5-4991-ae53-bfd38cdea120","Type":"ContainerDied","Data":"fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.401646 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s4gnq" event={"ID":"3af11570-35c5-4991-ae53-bfd38cdea120","Type":"ContainerDied","Data":"d4e2d5976de7e41df5083cc5404b6feab2175c3819c083fa731da04c97b5ea5e"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.403611 4998 generic.go:358] "Generic (PLEG): container finished" podID="3b36276e-af0d-4657-912a-df7c533bf822" containerID="859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7" exitCode=0 Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.403752 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhqvx" event={"ID":"3b36276e-af0d-4657-912a-df7c533bf822","Type":"ContainerDied","Data":"859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.403874 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fhqvx" event={"ID":"3b36276e-af0d-4657-912a-df7c533bf822","Type":"ContainerDied","Data":"aef07da86fd892fd11f72dd62406bebd6e12e091d616b233c476f5f936c6bca8"} Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.404088 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fhqvx" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.406751 4998 scope.go:117] "RemoveContainer" containerID="1b7579a9dc13ba7bc6b599a04153703ae4fdfc30f74d92174b2c9f250e9a8fe0" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.420272 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" podStartSLOduration=1.420251027 podStartE2EDuration="1.420251027s" podCreationTimestamp="2025-12-08 18:56:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 18:56:42.414007936 +0000 UTC m=+306.062050626" watchObservedRunningTime="2025-12-08 18:56:42.420251027 +0000 UTC m=+306.068293717" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.454970 4998 scope.go:117] "RemoveContainer" containerID="645eb174f7bd1e988f8b3117db952b91271fe6e8a0fbb44f9d8b9178417b9b01" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.459507 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-lcgg8"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.469900 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-lcgg8"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.482155 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ljbvf"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.489618 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ljbvf"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.496809 4998 kubelet.go:2553] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-s4gnq"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.499784 4998 scope.go:117] "RemoveContainer" containerID="110cca7fae63e90b83e32912d38f4d637284f3e6177cf50550feba27b537908f" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.505209 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s4gnq"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.509173 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhqvx"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.513956 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fhqvx"] Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.517999 4998 scope.go:117] "RemoveContainer" containerID="60e38b463df93959398594b794a89e0592b09533ea3ef48a3f9c4460544c77df" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.547202 4998 scope.go:117] "RemoveContainer" containerID="86d69b6e7cadf605e4003a9089cc1d12256107455e9a0d64c32f043738f844be" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.566278 4998 scope.go:117] "RemoveContainer" containerID="fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.583890 4998 scope.go:117] "RemoveContainer" containerID="b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.600609 4998 scope.go:117] "RemoveContainer" containerID="f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.628484 4998 scope.go:117] "RemoveContainer" containerID="fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306" Dec 08 18:56:42 crc kubenswrapper[4998]: E1208 18:56:42.628879 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306\": container with ID starting with fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306 not found: ID does not exist" containerID="fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.628916 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306"} err="failed to get container status \"fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306\": rpc error: code = NotFound desc = could not find container \"fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306\": container with ID starting with fe4a9d3dd00d59339da146e58b071153f524687efdb22f24646421d9d5241306 not found: ID does not exist" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.628942 4998 scope.go:117] "RemoveContainer" containerID="b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f" Dec 08 18:56:42 crc kubenswrapper[4998]: E1208 18:56:42.629401 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f\": container with ID starting with b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f not found: ID does not exist" containerID="b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f" Dec 08 18:56:42 
crc kubenswrapper[4998]: I1208 18:56:42.629433 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f"} err="failed to get container status \"b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f\": rpc error: code = NotFound desc = could not find container \"b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f\": container with ID starting with b06098ae6d078b7cc996644c06b790d0d1322a75347ae28a6022da44b8a7346f not found: ID does not exist" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.629453 4998 scope.go:117] "RemoveContainer" containerID="f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30" Dec 08 18:56:42 crc kubenswrapper[4998]: E1208 18:56:42.629914 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30\": container with ID starting with f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30 not found: ID does not exist" containerID="f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.629989 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30"} err="failed to get container status \"f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30\": rpc error: code = NotFound desc = could not find container \"f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30\": container with ID starting with f056f0a5daf2ee554a9ccf66a830597e0dd69186fccb07c5ece22332489b3a30 not found: ID does not exist" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.630067 4998 scope.go:117] "RemoveContainer" containerID="859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.650677 4998 scope.go:117] "RemoveContainer" containerID="087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.665617 4998 scope.go:117] "RemoveContainer" containerID="08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.683500 4998 scope.go:117] "RemoveContainer" containerID="859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7" Dec 08 18:56:42 crc kubenswrapper[4998]: E1208 18:56:42.683960 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7\": container with ID starting with 859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7 not found: ID does not exist" containerID="859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.683993 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7"} err="failed to get container status \"859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7\": rpc error: code = NotFound desc = could not find container \"859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7\": container with ID starting with 
859b948813ba7783cad4fa645b2ca2670530ae5c3edd224697b7b0ac90e4bbb7 not found: ID does not exist" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.684017 4998 scope.go:117] "RemoveContainer" containerID="087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22" Dec 08 18:56:42 crc kubenswrapper[4998]: E1208 18:56:42.684311 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22\": container with ID starting with 087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22 not found: ID does not exist" containerID="087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.684342 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22"} err="failed to get container status \"087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22\": rpc error: code = NotFound desc = could not find container \"087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22\": container with ID starting with 087c94c93a2eafb20b1ab20c6da948bcf0c1217a90cd3495c39bc3c39dc6fb22 not found: ID does not exist" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.684366 4998 scope.go:117] "RemoveContainer" containerID="08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43" Dec 08 18:56:42 crc kubenswrapper[4998]: E1208 18:56:42.684902 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43\": container with ID starting with 08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43 not found: ID does not exist" containerID="08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43" Dec 08 18:56:42 crc kubenswrapper[4998]: I1208 18:56:42.684939 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43"} err="failed to get container status \"08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43\": rpc error: code = NotFound desc = could not find container \"08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43\": container with ID starting with 08d4e1b07de4d58e3be1711b2cb1dc5d57f36acfaa5601a9267d446d510a4c43 not found: ID does not exist" Dec 08 18:56:43 crc kubenswrapper[4998]: I1208 18:56:43.376187 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" path="/var/lib/kubelet/pods/3af11570-35c5-4991-ae53-bfd38cdea120/volumes" Dec 08 18:56:43 crc kubenswrapper[4998]: I1208 18:56:43.377270 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b36276e-af0d-4657-912a-df7c533bf822" path="/var/lib/kubelet/pods/3b36276e-af0d-4657-912a-df7c533bf822/volumes" Dec 08 18:56:43 crc kubenswrapper[4998]: I1208 18:56:43.377941 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" path="/var/lib/kubelet/pods/a920e838-b750-47a2-8241-bfd4d1d6f5b8/volumes" Dec 08 18:56:43 crc kubenswrapper[4998]: I1208 18:56:43.379263 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" 
path="/var/lib/kubelet/pods/b590b4bf-59f2-41c3-9284-1a05b5931ca8/volumes" Dec 08 18:56:43 crc kubenswrapper[4998]: I1208 18:56:43.380202 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" path="/var/lib/kubelet/pods/d5cb67e5-9aca-42f2-8034-6d97ea435de5/volumes" Dec 08 18:56:43 crc kubenswrapper[4998]: I1208 18:56:43.422143 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-svhlx" Dec 08 18:56:44 crc kubenswrapper[4998]: I1208 18:56:44.478758 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Dec 08 18:56:45 crc kubenswrapper[4998]: I1208 18:56:45.676296 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Dec 08 18:56:46 crc kubenswrapper[4998]: I1208 18:56:46.745429 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Dec 08 18:56:47 crc kubenswrapper[4998]: I1208 18:56:47.445474 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Dec 08 18:56:50 crc kubenswrapper[4998]: I1208 18:56:50.531567 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Dec 08 18:56:52 crc kubenswrapper[4998]: I1208 18:56:52.565619 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 18:56:54 crc kubenswrapper[4998]: I1208 18:56:54.179921 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Dec 08 18:56:54 crc kubenswrapper[4998]: I1208 18:56:54.998027 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Dec 08 18:57:00 crc kubenswrapper[4998]: I1208 18:57:00.259593 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Dec 08 18:57:02 crc kubenswrapper[4998]: I1208 18:57:02.385547 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Dec 08 18:57:02 crc kubenswrapper[4998]: I1208 18:57:02.902981 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Dec 08 18:57:04 crc kubenswrapper[4998]: I1208 18:57:04.244676 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Dec 08 18:57:04 crc kubenswrapper[4998]: I1208 18:57:04.302838 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Dec 08 18:57:28 crc kubenswrapper[4998]: I1208 18:57:28.513720 4998 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.152939 4998 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-rwqk2"] Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153643 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153669 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153713 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153722 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153737 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153745 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153754 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153761 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153771 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153778 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153790 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153798 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153812 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153821 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="extract-utilities" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153832 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153839 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153853 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" 
containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153860 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153871 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerName="marketplace-operator" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153888 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerName="marketplace-operator" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153900 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153907 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153917 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153924 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153934 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.153941 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="extract-content" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.154049 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3b36276e-af0d-4657-912a-df7c533bf822" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.154079 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="a920e838-b750-47a2-8241-bfd4d1d6f5b8" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.154088 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="d5cb67e5-9aca-42f2-8034-6d97ea435de5" containerName="marketplace-operator" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.154102 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="b590b4bf-59f2-41c3-9284-1a05b5931ca8" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.154113 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3af11570-35c5-4991-ae53-bfd38cdea120" containerName="registry-server" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.165305 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.168282 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwqk2"] Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.173699 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.292777 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a19ced27-321a-4373-92a2-dc7d1ba64f91-utilities\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.292861 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psjxn\" (UniqueName: \"kubernetes.io/projected/a19ced27-321a-4373-92a2-dc7d1ba64f91-kube-api-access-psjxn\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.292918 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a19ced27-321a-4373-92a2-dc7d1ba64f91-catalog-content\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.393521 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-psjxn\" (UniqueName: \"kubernetes.io/projected/a19ced27-321a-4373-92a2-dc7d1ba64f91-kube-api-access-psjxn\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.393582 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a19ced27-321a-4373-92a2-dc7d1ba64f91-catalog-content\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.393653 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a19ced27-321a-4373-92a2-dc7d1ba64f91-utilities\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.394257 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a19ced27-321a-4373-92a2-dc7d1ba64f91-utilities\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.394485 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a19ced27-321a-4373-92a2-dc7d1ba64f91-catalog-content\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") 
" pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.415652 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-psjxn\" (UniqueName: \"kubernetes.io/projected/a19ced27-321a-4373-92a2-dc7d1ba64f91-kube-api-access-psjxn\") pod \"redhat-operators-rwqk2\" (UID: \"a19ced27-321a-4373-92a2-dc7d1ba64f91\") " pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.488060 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:30 crc kubenswrapper[4998]: I1208 18:57:30.740293 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-rwqk2"] Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.162135 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-fvmqj"] Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.169452 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.174003 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.185241 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvmqj"] Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.308138 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbksf\" (UniqueName: \"kubernetes.io/projected/691c8ae1-1e98-4785-93a2-dfb245bbc808-kube-api-access-kbksf\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.308224 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c8ae1-1e98-4785-93a2-dfb245bbc808-catalog-content\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.308399 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c8ae1-1e98-4785-93a2-dfb245bbc808-utilities\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.410250 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kbksf\" (UniqueName: \"kubernetes.io/projected/691c8ae1-1e98-4785-93a2-dfb245bbc808-kube-api-access-kbksf\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.410324 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c8ae1-1e98-4785-93a2-dfb245bbc808-catalog-content\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " 
pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.410439 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c8ae1-1e98-4785-93a2-dfb245bbc808-utilities\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.411245 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/691c8ae1-1e98-4785-93a2-dfb245bbc808-catalog-content\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.411265 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/691c8ae1-1e98-4785-93a2-dfb245bbc808-utilities\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.437339 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbksf\" (UniqueName: \"kubernetes.io/projected/691c8ae1-1e98-4785-93a2-dfb245bbc808-kube-api-access-kbksf\") pod \"community-operators-fvmqj\" (UID: \"691c8ae1-1e98-4785-93a2-dfb245bbc808\") " pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.486957 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.696819 4998 generic.go:358] "Generic (PLEG): container finished" podID="a19ced27-321a-4373-92a2-dc7d1ba64f91" containerID="7dc8fc7492a2d2c5339c2cd29927117257b34a815bd599f905b3d3624c59f816" exitCode=0 Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.696893 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwqk2" event={"ID":"a19ced27-321a-4373-92a2-dc7d1ba64f91","Type":"ContainerDied","Data":"7dc8fc7492a2d2c5339c2cd29927117257b34a815bd599f905b3d3624c59f816"} Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.696924 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwqk2" event={"ID":"a19ced27-321a-4373-92a2-dc7d1ba64f91","Type":"ContainerStarted","Data":"b72c00e81ca920006b53b7eccba6da1c9a107fe2b1636f90a79063e656f5c27c"} Dec 08 18:57:31 crc kubenswrapper[4998]: I1208 18:57:31.752169 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-fvmqj"] Dec 08 18:57:31 crc kubenswrapper[4998]: W1208 18:57:31.758927 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod691c8ae1_1e98_4785_93a2_dfb245bbc808.slice/crio-56d4a9e0eda9b02b0ebe7e45014d08d2990cf24a1a747a499227cc38309535b4 WatchSource:0}: Error finding container 56d4a9e0eda9b02b0ebe7e45014d08d2990cf24a1a747a499227cc38309535b4: Status 404 returned error can't find the container with id 56d4a9e0eda9b02b0ebe7e45014d08d2990cf24a1a747a499227cc38309535b4 Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.704587 4998 generic.go:358] "Generic (PLEG): container finished" 
podID="691c8ae1-1e98-4785-93a2-dfb245bbc808" containerID="b38748909451fa9c21e3da8bf46e1a91582de032d3015669e43dc0ffe4e17a4e" exitCode=0 Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.705047 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvmqj" event={"ID":"691c8ae1-1e98-4785-93a2-dfb245bbc808","Type":"ContainerDied","Data":"b38748909451fa9c21e3da8bf46e1a91582de032d3015669e43dc0ffe4e17a4e"} Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.705104 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvmqj" event={"ID":"691c8ae1-1e98-4785-93a2-dfb245bbc808","Type":"ContainerStarted","Data":"56d4a9e0eda9b02b0ebe7e45014d08d2990cf24a1a747a499227cc38309535b4"} Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.720339 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwqk2" event={"ID":"a19ced27-321a-4373-92a2-dc7d1ba64f91","Type":"ContainerStarted","Data":"afce64be86685b4688eaa424751684ae0ae88c30802926ec9500d85f1daedcf3"} Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.757056 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-ffnth"] Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.765445 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.767895 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.775761 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffnth"] Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.871230 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316580e7-0956-44a5-8659-8a5f9903a5b2-catalog-content\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.871321 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316580e7-0956-44a5-8659-8a5f9903a5b2-utilities\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.871394 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc64q\" (UniqueName: \"kubernetes.io/projected/316580e7-0956-44a5-8659-8a5f9903a5b2-kube-api-access-rc64q\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.972391 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rc64q\" (UniqueName: \"kubernetes.io/projected/316580e7-0956-44a5-8659-8a5f9903a5b2-kube-api-access-rc64q\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 
18:57:32.972843 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316580e7-0956-44a5-8659-8a5f9903a5b2-catalog-content\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.972874 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316580e7-0956-44a5-8659-8a5f9903a5b2-utilities\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.973261 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/316580e7-0956-44a5-8659-8a5f9903a5b2-catalog-content\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:32 crc kubenswrapper[4998]: I1208 18:57:32.973443 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/316580e7-0956-44a5-8659-8a5f9903a5b2-utilities\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.004537 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rc64q\" (UniqueName: \"kubernetes.io/projected/316580e7-0956-44a5-8659-8a5f9903a5b2-kube-api-access-rc64q\") pod \"certified-operators-ffnth\" (UID: \"316580e7-0956-44a5-8659-8a5f9903a5b2\") " pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.081624 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.504541 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-ffnth"] Dec 08 18:57:33 crc kubenswrapper[4998]: W1208 18:57:33.508328 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod316580e7_0956_44a5_8659_8a5f9903a5b2.slice/crio-abf66d2b5753302364729121804738f3c3e3cbf0d5593192c6ff3858af5f70b2 WatchSource:0}: Error finding container abf66d2b5753302364729121804738f3c3e3cbf0d5593192c6ff3858af5f70b2: Status 404 returned error can't find the container with id abf66d2b5753302364729121804738f3c3e3cbf0d5593192c6ff3858af5f70b2 Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.726928 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvmqj" event={"ID":"691c8ae1-1e98-4785-93a2-dfb245bbc808","Type":"ContainerStarted","Data":"7bbd23ed6c4c3d2a38debf6b3c51acae629feec2fc8996e162e040dede0a4115"} Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.729925 4998 generic.go:358] "Generic (PLEG): container finished" podID="316580e7-0956-44a5-8659-8a5f9903a5b2" containerID="a5bee8e17566ea64e864fd4ebe901ce9c02744bc61df0b7659e2d508b1f49fda" exitCode=0 Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.730021 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffnth" event={"ID":"316580e7-0956-44a5-8659-8a5f9903a5b2","Type":"ContainerDied","Data":"a5bee8e17566ea64e864fd4ebe901ce9c02744bc61df0b7659e2d508b1f49fda"} Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.730052 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffnth" event={"ID":"316580e7-0956-44a5-8659-8a5f9903a5b2","Type":"ContainerStarted","Data":"abf66d2b5753302364729121804738f3c3e3cbf0d5593192c6ff3858af5f70b2"} Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.735571 4998 generic.go:358] "Generic (PLEG): container finished" podID="a19ced27-321a-4373-92a2-dc7d1ba64f91" containerID="afce64be86685b4688eaa424751684ae0ae88c30802926ec9500d85f1daedcf3" exitCode=0 Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.735774 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwqk2" event={"ID":"a19ced27-321a-4373-92a2-dc7d1ba64f91","Type":"ContainerDied","Data":"afce64be86685b4688eaa424751684ae0ae88c30802926ec9500d85f1daedcf3"} Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.749774 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-fzszn"] Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.756042 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.761092 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.767779 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzszn"] Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.784049 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgsq2\" (UniqueName: \"kubernetes.io/projected/0576ce95-74b4-4a08-8607-85b622a77e83-kube-api-access-qgsq2\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.784141 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-catalog-content\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.784249 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-utilities\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.885758 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-catalog-content\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.885890 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-utilities\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.885971 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qgsq2\" (UniqueName: \"kubernetes.io/projected/0576ce95-74b4-4a08-8607-85b622a77e83-kube-api-access-qgsq2\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.886885 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-catalog-content\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.886951 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-utilities\") pod \"redhat-marketplace-fzszn\" (UID: 
\"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:33 crc kubenswrapper[4998]: I1208 18:57:33.919317 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgsq2\" (UniqueName: \"kubernetes.io/projected/0576ce95-74b4-4a08-8607-85b622a77e83-kube-api-access-qgsq2\") pod \"redhat-marketplace-fzszn\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.109659 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.534567 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzszn"] Dec 08 18:57:34 crc kubenswrapper[4998]: W1208 18:57:34.542635 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0576ce95_74b4_4a08_8607_85b622a77e83.slice/crio-d76e4c095c1e522c1e3d4aee7ccfd2eb7c117a8a32db08382516ec320ac04aef WatchSource:0}: Error finding container d76e4c095c1e522c1e3d4aee7ccfd2eb7c117a8a32db08382516ec320ac04aef: Status 404 returned error can't find the container with id d76e4c095c1e522c1e3d4aee7ccfd2eb7c117a8a32db08382516ec320ac04aef Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.745529 4998 generic.go:358] "Generic (PLEG): container finished" podID="691c8ae1-1e98-4785-93a2-dfb245bbc808" containerID="7bbd23ed6c4c3d2a38debf6b3c51acae629feec2fc8996e162e040dede0a4115" exitCode=0 Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.745642 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvmqj" event={"ID":"691c8ae1-1e98-4785-93a2-dfb245bbc808","Type":"ContainerDied","Data":"7bbd23ed6c4c3d2a38debf6b3c51acae629feec2fc8996e162e040dede0a4115"} Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.749009 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffnth" event={"ID":"316580e7-0956-44a5-8659-8a5f9903a5b2","Type":"ContainerStarted","Data":"a5c808c73ac24100f2191faaa3b5d1076c51f43f1e51e4459a3d0cdb69bbaddf"} Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.751376 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-rwqk2" event={"ID":"a19ced27-321a-4373-92a2-dc7d1ba64f91","Type":"ContainerStarted","Data":"7ffebeb90f48e58b5c97822eb9478ffc208200edde14c8bab1f232cc314ce05b"} Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.753173 4998 generic.go:358] "Generic (PLEG): container finished" podID="0576ce95-74b4-4a08-8607-85b622a77e83" containerID="04e4204a07c7e6b0989bb89be1d57e1bf84f936465ae5be30c2132f34ff51b5c" exitCode=0 Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.753219 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzszn" event={"ID":"0576ce95-74b4-4a08-8607-85b622a77e83","Type":"ContainerDied","Data":"04e4204a07c7e6b0989bb89be1d57e1bf84f936465ae5be30c2132f34ff51b5c"} Dec 08 18:57:34 crc kubenswrapper[4998]: I1208 18:57:34.753241 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzszn" event={"ID":"0576ce95-74b4-4a08-8607-85b622a77e83","Type":"ContainerStarted","Data":"d76e4c095c1e522c1e3d4aee7ccfd2eb7c117a8a32db08382516ec320ac04aef"} Dec 08 18:57:34 crc 
kubenswrapper[4998]: I1208 18:57:34.806485 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-rwqk2" podStartSLOduration=4.208398209 podStartE2EDuration="4.806461217s" podCreationTimestamp="2025-12-08 18:57:30 +0000 UTC" firstStartedPulling="2025-12-08 18:57:31.697716788 +0000 UTC m=+355.345759478" lastFinishedPulling="2025-12-08 18:57:32.295779776 +0000 UTC m=+355.943822486" observedRunningTime="2025-12-08 18:57:34.793909594 +0000 UTC m=+358.441952324" watchObservedRunningTime="2025-12-08 18:57:34.806461217 +0000 UTC m=+358.454503907" Dec 08 18:57:35 crc kubenswrapper[4998]: I1208 18:57:35.759857 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-fvmqj" event={"ID":"691c8ae1-1e98-4785-93a2-dfb245bbc808","Type":"ContainerStarted","Data":"c197acb6f27e1055125e846c49dc87e86911b68cfe5e498b6f61bcb1a0591851"} Dec 08 18:57:35 crc kubenswrapper[4998]: I1208 18:57:35.761550 4998 generic.go:358] "Generic (PLEG): container finished" podID="316580e7-0956-44a5-8659-8a5f9903a5b2" containerID="a5c808c73ac24100f2191faaa3b5d1076c51f43f1e51e4459a3d0cdb69bbaddf" exitCode=0 Dec 08 18:57:35 crc kubenswrapper[4998]: I1208 18:57:35.761572 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffnth" event={"ID":"316580e7-0956-44a5-8659-8a5f9903a5b2","Type":"ContainerDied","Data":"a5c808c73ac24100f2191faaa3b5d1076c51f43f1e51e4459a3d0cdb69bbaddf"} Dec 08 18:57:35 crc kubenswrapper[4998]: I1208 18:57:35.765233 4998 generic.go:358] "Generic (PLEG): container finished" podID="0576ce95-74b4-4a08-8607-85b622a77e83" containerID="273c9af024aaaf69ec9201fca7a9a2585a4bd47cfc15e7c79f2b5d4c1e828127" exitCode=0 Dec 08 18:57:35 crc kubenswrapper[4998]: I1208 18:57:35.765538 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzszn" event={"ID":"0576ce95-74b4-4a08-8607-85b622a77e83","Type":"ContainerDied","Data":"273c9af024aaaf69ec9201fca7a9a2585a4bd47cfc15e7c79f2b5d4c1e828127"} Dec 08 18:57:35 crc kubenswrapper[4998]: I1208 18:57:35.794670 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-fvmqj" podStartSLOduration=4.151581105 podStartE2EDuration="4.794649893s" podCreationTimestamp="2025-12-08 18:57:31 +0000 UTC" firstStartedPulling="2025-12-08 18:57:32.70658486 +0000 UTC m=+356.354627550" lastFinishedPulling="2025-12-08 18:57:33.349653618 +0000 UTC m=+356.997696338" observedRunningTime="2025-12-08 18:57:35.793837861 +0000 UTC m=+359.441880571" watchObservedRunningTime="2025-12-08 18:57:35.794649893 +0000 UTC m=+359.442692583" Dec 08 18:57:36 crc kubenswrapper[4998]: I1208 18:57:36.771973 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-ffnth" event={"ID":"316580e7-0956-44a5-8659-8a5f9903a5b2","Type":"ContainerStarted","Data":"0ed4ff1bc043b7cf656fb0b5e0f90a70eb6fd5fef1e7a4574b8a9ce5e52e2fba"} Dec 08 18:57:36 crc kubenswrapper[4998]: I1208 18:57:36.775927 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzszn" event={"ID":"0576ce95-74b4-4a08-8607-85b622a77e83","Type":"ContainerStarted","Data":"4c425f2bcff6326c4bb28ba8564268e72a2a9b998a3eff416f2be66328d3719b"} Dec 08 18:57:36 crc kubenswrapper[4998]: I1208 18:57:36.807290 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-ffnth" 
podStartSLOduration=4.154676939 podStartE2EDuration="4.807268818s" podCreationTimestamp="2025-12-08 18:57:32 +0000 UTC" firstStartedPulling="2025-12-08 18:57:33.730956394 +0000 UTC m=+357.378999084" lastFinishedPulling="2025-12-08 18:57:34.383548273 +0000 UTC m=+358.031590963" observedRunningTime="2025-12-08 18:57:36.801593032 +0000 UTC m=+360.449635742" watchObservedRunningTime="2025-12-08 18:57:36.807268818 +0000 UTC m=+360.455311518" Dec 08 18:57:36 crc kubenswrapper[4998]: I1208 18:57:36.833612 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-fzszn" podStartSLOduration=3.314066032 podStartE2EDuration="3.833560925s" podCreationTimestamp="2025-12-08 18:57:33 +0000 UTC" firstStartedPulling="2025-12-08 18:57:34.754076485 +0000 UTC m=+358.402119175" lastFinishedPulling="2025-12-08 18:57:35.273571378 +0000 UTC m=+358.921614068" observedRunningTime="2025-12-08 18:57:36.833145464 +0000 UTC m=+360.481188164" watchObservedRunningTime="2025-12-08 18:57:36.833560925 +0000 UTC m=+360.481603615" Dec 08 18:57:40 crc kubenswrapper[4998]: I1208 18:57:40.488501 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:40 crc kubenswrapper[4998]: I1208 18:57:40.488574 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:40 crc kubenswrapper[4998]: I1208 18:57:40.551374 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:41 crc kubenswrapper[4998]: I1208 18:57:41.075336 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-rwqk2" Dec 08 18:57:41 crc kubenswrapper[4998]: I1208 18:57:41.487843 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:41 crc kubenswrapper[4998]: I1208 18:57:41.488514 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:41 crc kubenswrapper[4998]: I1208 18:57:41.537396 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:41 crc kubenswrapper[4998]: I1208 18:57:41.859070 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-fvmqj" Dec 08 18:57:43 crc kubenswrapper[4998]: I1208 18:57:43.082747 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:43 crc kubenswrapper[4998]: I1208 18:57:43.082882 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:43 crc kubenswrapper[4998]: I1208 18:57:43.128703 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:43 crc kubenswrapper[4998]: I1208 18:57:43.860676 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-ffnth" Dec 08 18:57:44 crc kubenswrapper[4998]: I1208 18:57:44.110047 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" 
pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:44 crc kubenswrapper[4998]: I1208 18:57:44.110839 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:44 crc kubenswrapper[4998]: I1208 18:57:44.177392 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:57:44 crc kubenswrapper[4998]: I1208 18:57:44.887425 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 18:58:31 crc kubenswrapper[4998]: I1208 18:58:31.235709 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:58:31 crc kubenswrapper[4998]: I1208 18:58:31.237374 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:59:01 crc kubenswrapper[4998]: I1208 18:59:01.233242 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:59:01 crc kubenswrapper[4998]: I1208 18:59:01.233904 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.233115 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.233675 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.233814 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.234441 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d2f2eaca5a5842e093b639eb96abc33ec6a8a19f824e70f15725948e6787494a"} pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 
18:59:31.234522 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" containerID="cri-o://d2f2eaca5a5842e093b639eb96abc33ec6a8a19f824e70f15725948e6787494a" gracePeriod=600 Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.726485 4998 generic.go:358] "Generic (PLEG): container finished" podID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerID="d2f2eaca5a5842e093b639eb96abc33ec6a8a19f824e70f15725948e6787494a" exitCode=0 Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.726523 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerDied","Data":"d2f2eaca5a5842e093b639eb96abc33ec6a8a19f824e70f15725948e6787494a"} Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.727129 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"c0744be29aaa95b47f57535508586b72c282f591a8c2a2c6ed250f260c5fd85a"} Dec 08 18:59:31 crc kubenswrapper[4998]: I1208 18:59:31.727379 4998 scope.go:117] "RemoveContainer" containerID="7d9e6412225887a5ef7e949a0f1b9c6ec74833f87061a1b499a02691c587c157" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.186092 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc"] Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.211913 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc"] Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.212131 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.215861 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.219831 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.312196 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1c04fe2-91fe-4a54-be23-58e4986c86cb-config-volume\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.312544 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbsgr\" (UniqueName: \"kubernetes.io/projected/a1c04fe2-91fe-4a54-be23-58e4986c86cb-kube-api-access-vbsgr\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.312735 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1c04fe2-91fe-4a54-be23-58e4986c86cb-secret-volume\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.413609 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbsgr\" (UniqueName: \"kubernetes.io/projected/a1c04fe2-91fe-4a54-be23-58e4986c86cb-kube-api-access-vbsgr\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.413680 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1c04fe2-91fe-4a54-be23-58e4986c86cb-secret-volume\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.413753 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1c04fe2-91fe-4a54-be23-58e4986c86cb-config-volume\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.414699 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1c04fe2-91fe-4a54-be23-58e4986c86cb-config-volume\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 
08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.428415 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1c04fe2-91fe-4a54-be23-58e4986c86cb-secret-volume\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.434003 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbsgr\" (UniqueName: \"kubernetes.io/projected/a1c04fe2-91fe-4a54-be23-58e4986c86cb-kube-api-access-vbsgr\") pod \"collect-profiles-29420340-hblbc\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.532883 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:00 crc kubenswrapper[4998]: I1208 19:00:00.785501 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc"] Dec 08 19:00:00 crc kubenswrapper[4998]: W1208 19:00:00.802213 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1c04fe2_91fe_4a54_be23_58e4986c86cb.slice/crio-d67ca4782f878c7bc2ae89b4daca50adb0e26dfa3fdf469e36f710100d93b83e WatchSource:0}: Error finding container d67ca4782f878c7bc2ae89b4daca50adb0e26dfa3fdf469e36f710100d93b83e: Status 404 returned error can't find the container with id d67ca4782f878c7bc2ae89b4daca50adb0e26dfa3fdf469e36f710100d93b83e Dec 08 19:00:01 crc kubenswrapper[4998]: I1208 19:00:01.504397 4998 generic.go:358] "Generic (PLEG): container finished" podID="a1c04fe2-91fe-4a54-be23-58e4986c86cb" containerID="3f9959fcd3519501d9adfb5a569c784092326643b53407a3c68c4bad049eeae7" exitCode=0 Dec 08 19:00:01 crc kubenswrapper[4998]: I1208 19:00:01.504500 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" event={"ID":"a1c04fe2-91fe-4a54-be23-58e4986c86cb","Type":"ContainerDied","Data":"3f9959fcd3519501d9adfb5a569c784092326643b53407a3c68c4bad049eeae7"} Dec 08 19:00:01 crc kubenswrapper[4998]: I1208 19:00:01.505924 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" event={"ID":"a1c04fe2-91fe-4a54-be23-58e4986c86cb","Type":"ContainerStarted","Data":"d67ca4782f878c7bc2ae89b4daca50adb0e26dfa3fdf469e36f710100d93b83e"} Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.741512 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.749566 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1c04fe2-91fe-4a54-be23-58e4986c86cb-secret-volume\") pod \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.749607 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbsgr\" (UniqueName: \"kubernetes.io/projected/a1c04fe2-91fe-4a54-be23-58e4986c86cb-kube-api-access-vbsgr\") pod \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.749706 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1c04fe2-91fe-4a54-be23-58e4986c86cb-config-volume\") pod \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\" (UID: \"a1c04fe2-91fe-4a54-be23-58e4986c86cb\") " Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.750619 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c04fe2-91fe-4a54-be23-58e4986c86cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "a1c04fe2-91fe-4a54-be23-58e4986c86cb" (UID: "a1c04fe2-91fe-4a54-be23-58e4986c86cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.760614 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c04fe2-91fe-4a54-be23-58e4986c86cb-kube-api-access-vbsgr" (OuterVolumeSpecName: "kube-api-access-vbsgr") pod "a1c04fe2-91fe-4a54-be23-58e4986c86cb" (UID: "a1c04fe2-91fe-4a54-be23-58e4986c86cb"). InnerVolumeSpecName "kube-api-access-vbsgr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.769332 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1c04fe2-91fe-4a54-be23-58e4986c86cb-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "a1c04fe2-91fe-4a54-be23-58e4986c86cb" (UID: "a1c04fe2-91fe-4a54-be23-58e4986c86cb"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.851511 4998 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/a1c04fe2-91fe-4a54-be23-58e4986c86cb-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.851930 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbsgr\" (UniqueName: \"kubernetes.io/projected/a1c04fe2-91fe-4a54-be23-58e4986c86cb-kube-api-access-vbsgr\") on node \"crc\" DevicePath \"\"" Dec 08 19:00:02 crc kubenswrapper[4998]: I1208 19:00:02.852046 4998 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a1c04fe2-91fe-4a54-be23-58e4986c86cb-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:00:03 crc kubenswrapper[4998]: I1208 19:00:03.521347 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" event={"ID":"a1c04fe2-91fe-4a54-be23-58e4986c86cb","Type":"ContainerDied","Data":"d67ca4782f878c7bc2ae89b4daca50adb0e26dfa3fdf469e36f710100d93b83e"} Dec 08 19:00:03 crc kubenswrapper[4998]: I1208 19:00:03.521389 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d67ca4782f878c7bc2ae89b4daca50adb0e26dfa3fdf469e36f710100d93b83e" Dec 08 19:00:03 crc kubenswrapper[4998]: I1208 19:00:03.521414 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420340-hblbc" Dec 08 19:01:31 crc kubenswrapper[4998]: I1208 19:01:31.233616 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:01:31 crc kubenswrapper[4998]: I1208 19:01:31.234464 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:01:37 crc kubenswrapper[4998]: I1208 19:01:37.545476 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:01:37 crc kubenswrapper[4998]: I1208 19:01:37.548258 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.763137 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr"] Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.763885 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="ovnkube-cluster-manager" containerID="cri-o://6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.764078 4998 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="kube-rbac-proxy" containerID="cri-o://1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.983416 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h7zr9"] Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.984109 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-node" containerID="cri-o://e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.984077 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-controller" containerID="cri-o://ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.984271 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="sbdb" containerID="cri-o://8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.984337 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.984247 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-acl-logging" containerID="cri-o://f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.984376 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="nbdb" containerID="cri-o://ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e" gracePeriod=30 Dec 08 19:01:40 crc kubenswrapper[4998]: I1208 19:01:40.984549 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="northd" containerID="cri-o://9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16" gracePeriod=30 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.004166 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.065549 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovnkube-controller" containerID="cri-o://ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4" gracePeriod=30 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.088340 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6"] Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089026 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="kube-rbac-proxy" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089062 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="kube-rbac-proxy" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089091 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="ovnkube-cluster-manager" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089101 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="ovnkube-cluster-manager" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089121 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a1c04fe2-91fe-4a54-be23-58e4986c86cb" containerName="collect-profiles" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089130 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c04fe2-91fe-4a54-be23-58e4986c86cb" containerName="collect-profiles" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089305 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="ovnkube-cluster-manager" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089324 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="b8867028-389a-494e-b230-ed29201b63ca" containerName="kube-rbac-proxy" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.089336 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="a1c04fe2-91fe-4a54-be23-58e4986c86cb" containerName="collect-profiles" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.094546 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.155652 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-env-overrides\") pod \"b8867028-389a-494e-b230-ed29201b63ca\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.155794 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj679\" (UniqueName: \"kubernetes.io/projected/b8867028-389a-494e-b230-ed29201b63ca-kube-api-access-pj679\") pod \"b8867028-389a-494e-b230-ed29201b63ca\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.155908 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-ovnkube-config\") pod \"b8867028-389a-494e-b230-ed29201b63ca\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.155962 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/b8867028-389a-494e-b230-ed29201b63ca-ovn-control-plane-metrics-cert\") pod \"b8867028-389a-494e-b230-ed29201b63ca\" (UID: \"b8867028-389a-494e-b230-ed29201b63ca\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.156087 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04baa9da-4e84-4493-9eba-92d838f417fd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.156154 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9m8z\" (UniqueName: \"kubernetes.io/projected/04baa9da-4e84-4493-9eba-92d838f417fd-kube-api-access-q9m8z\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.156172 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04baa9da-4e84-4493-9eba-92d838f417fd-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.156194 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04baa9da-4e84-4493-9eba-92d838f417fd-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.156957 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "b8867028-389a-494e-b230-ed29201b63ca" (UID: "b8867028-389a-494e-b230-ed29201b63ca"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.157344 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "b8867028-389a-494e-b230-ed29201b63ca" (UID: "b8867028-389a-494e-b230-ed29201b63ca"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.174959 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b8867028-389a-494e-b230-ed29201b63ca-kube-api-access-pj679" (OuterVolumeSpecName: "kube-api-access-pj679") pod "b8867028-389a-494e-b230-ed29201b63ca" (UID: "b8867028-389a-494e-b230-ed29201b63ca"). InnerVolumeSpecName "kube-api-access-pj679". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.175780 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b8867028-389a-494e-b230-ed29201b63ca-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "b8867028-389a-494e-b230-ed29201b63ca" (UID: "b8867028-389a-494e-b230-ed29201b63ca"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.222056 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-72nfz_88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa/kube-multus/0.log" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.222111 4998 generic.go:358] "Generic (PLEG): container finished" podID="88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa" containerID="cf8eb80c08729777b822ab3758bd12c4310a87e1949d64d6bb3f074c45ec7fbd" exitCode=2 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.222208 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-72nfz" event={"ID":"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa","Type":"ContainerDied","Data":"cf8eb80c08729777b822ab3758bd12c4310a87e1949d64d6bb3f074c45ec7fbd"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.223093 4998 scope.go:117] "RemoveContainer" containerID="cf8eb80c08729777b822ab3758bd12c4310a87e1949d64d6bb3f074c45ec7fbd" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.241824 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7zr9_fc7150c6-b180-4712-a5ed-6b25328d0118/ovn-acl-logging/0.log" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.242492 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7zr9_fc7150c6-b180-4712-a5ed-6b25328d0118/ovn-controller/0.log" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.242868 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e" exitCode=0 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.242893 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" 
containerID="9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16" exitCode=0 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.242906 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb" exitCode=0 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.242916 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5" exitCode=0 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.242924 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d" exitCode=143 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.242932 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8" exitCode=143 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.243011 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.243041 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.243053 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.243066 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.243080 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.243090 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.244721 4998 generic.go:358] "Generic (PLEG): container finished" podID="b8867028-389a-494e-b230-ed29201b63ca" containerID="6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918" exitCode=0 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.244742 4998 generic.go:358] "Generic (PLEG): container finished" podID="b8867028-389a-494e-b230-ed29201b63ca" containerID="1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0" exitCode=0 Dec 08 19:01:41 crc 
kubenswrapper[4998]: I1208 19:01:41.244838 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" event={"ID":"b8867028-389a-494e-b230-ed29201b63ca","Type":"ContainerDied","Data":"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.244860 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" event={"ID":"b8867028-389a-494e-b230-ed29201b63ca","Type":"ContainerDied","Data":"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.244871 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" event={"ID":"b8867028-389a-494e-b230-ed29201b63ca","Type":"ContainerDied","Data":"de33d590dbedbf5e0c3d89c87fd218cbbc7fa0e27df0ce3227bda54457ff8636"} Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.244889 4998 scope.go:117] "RemoveContainer" containerID="6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.245071 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257029 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04baa9da-4e84-4493-9eba-92d838f417fd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257130 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q9m8z\" (UniqueName: \"kubernetes.io/projected/04baa9da-4e84-4493-9eba-92d838f417fd-kube-api-access-q9m8z\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257169 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04baa9da-4e84-4493-9eba-92d838f417fd-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257197 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04baa9da-4e84-4493-9eba-92d838f417fd-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257253 4998 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257267 4998 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/b8867028-389a-494e-b230-ed29201b63ca-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257295 4998 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/b8867028-389a-494e-b230-ed29201b63ca-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.257308 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pj679\" (UniqueName: \"kubernetes.io/projected/b8867028-389a-494e-b230-ed29201b63ca-kube-api-access-pj679\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.258329 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/04baa9da-4e84-4493-9eba-92d838f417fd-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.263210 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/04baa9da-4e84-4493-9eba-92d838f417fd-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.264021 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/04baa9da-4e84-4493-9eba-92d838f417fd-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.274317 4998 scope.go:117] "RemoveContainer" containerID="1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.288030 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr"] Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.289166 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9m8z\" (UniqueName: \"kubernetes.io/projected/04baa9da-4e84-4493-9eba-92d838f417fd-kube-api-access-q9m8z\") pod \"ovnkube-control-plane-97c9b6c48-klgv6\" (UID: \"04baa9da-4e84-4493-9eba-92d838f417fd\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.293296 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-ql7xr"] Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.342392 4998 scope.go:117] "RemoveContainer" containerID="6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918" Dec 08 19:01:41 crc kubenswrapper[4998]: E1208 19:01:41.342920 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918\": container with ID starting with 6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918 not found: ID does not exist" 
containerID="6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.343001 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918"} err="failed to get container status \"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918\": rpc error: code = NotFound desc = could not find container \"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918\": container with ID starting with 6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918 not found: ID does not exist" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.343033 4998 scope.go:117] "RemoveContainer" containerID="1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0" Dec 08 19:01:41 crc kubenswrapper[4998]: E1208 19:01:41.343399 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0\": container with ID starting with 1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0 not found: ID does not exist" containerID="1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.343430 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0"} err="failed to get container status \"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0\": rpc error: code = NotFound desc = could not find container \"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0\": container with ID starting with 1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0 not found: ID does not exist" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.343457 4998 scope.go:117] "RemoveContainer" containerID="6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.343715 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918"} err="failed to get container status \"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918\": rpc error: code = NotFound desc = could not find container \"6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918\": container with ID starting with 6b207481e2957c7b6499a16b48a0560d9fd5f5dd5cf5c2f3717ac55997a6a918 not found: ID does not exist" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.343746 4998 scope.go:117] "RemoveContainer" containerID="1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.343993 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0"} err="failed to get container status \"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0\": rpc error: code = NotFound desc = could not find container \"1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0\": container with ID starting with 1fcf6bf180f0982369b5654dff091da1f8a9a1a5fb1935d45b9cf5ab64015dd0 not found: ID does not exist" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.366071 4998 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7zr9_fc7150c6-b180-4712-a5ed-6b25328d0118/ovn-acl-logging/0.log" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.366612 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7zr9_fc7150c6-b180-4712-a5ed-6b25328d0118/ovn-controller/0.log" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.367899 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.374321 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b8867028-389a-494e-b230-ed29201b63ca" path="/var/lib/kubelet/pods/b8867028-389a-494e-b230-ed29201b63ca/volumes" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.431246 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-zcct5"] Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432044 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="northd" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432150 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="northd" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432228 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovnkube-controller" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432296 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovnkube-controller" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432365 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="sbdb" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432435 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="sbdb" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432525 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kubecfg-setup" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432615 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kubecfg-setup" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432738 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-acl-logging" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432837 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-acl-logging" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432913 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-node" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.432977 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-node" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433057 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="nbdb" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433183 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="nbdb" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433261 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-controller" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433403 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-controller" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433501 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433614 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433807 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="northd" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433897 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-acl-logging" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.433981 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-ovn-metrics" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.434047 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovn-controller" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.434144 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="nbdb" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.434559 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="ovnkube-controller" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.434651 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="sbdb" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.434776 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerName="kube-rbac-proxy-node" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.440422 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461585 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-bin\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461629 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-log-socket\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461680 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lvgz\" (UniqueName: \"kubernetes.io/projected/fc7150c6-b180-4712-a5ed-6b25328d0118-kube-api-access-9lvgz\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461712 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461740 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-script-lib\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461740 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-log-socket" (OuterVolumeSpecName: "log-socket") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461775 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-config\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461813 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-systemd-units\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461848 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-openvswitch\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461873 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-ovn\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461895 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-ovn-kubernetes\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461930 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-netns\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462002 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-systemd\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462031 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-kubelet\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462059 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-var-lib-openvswitch\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.461990 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: 
"fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462095 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-slash\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462129 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc7150c6-b180-4712-a5ed-6b25328d0118-ovn-node-metrics-cert\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462154 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-node-log\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462029 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462114 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462051 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462076 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462115 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462142 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-slash" (OuterVolumeSpecName: "host-slash") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462160 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462197 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-netd\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462231 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-node-log" (OuterVolumeSpecName: "node-log") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462231 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-var-lib-cni-networks-ovn-kubernetes\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462252 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462272 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462274 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-etc-openvswitch\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462290 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462312 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-env-overrides\") pod \"fc7150c6-b180-4712-a5ed-6b25328d0118\" (UID: \"fc7150c6-b180-4712-a5ed-6b25328d0118\") " Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462585 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462667 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.462792 4998 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-systemd-units\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463034 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463239 4998 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463262 4998 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-ovn\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463275 4998 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463288 4998 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-run-netns\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463300 4998 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-kubelet\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463311 4998 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463322 4998 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-slash\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463332 4998 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-node-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463339 4998 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-netd\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463348 4998 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463357 4998 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463367 4998 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-host-cni-bin\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.463375 4998 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-log-socket\") on node \"crc\" 
DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.470872 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc7150c6-b180-4712-a5ed-6b25328d0118-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.476196 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc7150c6-b180-4712-a5ed-6b25328d0118-kube-api-access-9lvgz" (OuterVolumeSpecName: "kube-api-access-9lvgz") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "kube-api-access-9lvgz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.476319 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.482421 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "fc7150c6-b180-4712-a5ed-6b25328d0118" (UID: "fc7150c6-b180-4712-a5ed-6b25328d0118"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564488 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-log-socket\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564536 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-etc-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564564 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovnkube-script-lib\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564665 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-env-overrides\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564745 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-ovn\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564841 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564870 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-cni-bin\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564906 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7dks\" (UniqueName: \"kubernetes.io/projected/82575557-7c3a-4f8d-ad07-50a1a07cb731-kube-api-access-c7dks\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564939 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-kubelet\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.564978 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-systemd-units\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565001 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-run-ovn-kubernetes\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565027 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovn-node-metrics-cert\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565057 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-run-netns\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565085 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-systemd\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565136 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-cni-netd\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565169 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565192 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-slash\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565218 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-node-log\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565239 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovnkube-config\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565258 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-var-lib-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565307 4998 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/fc7150c6-b180-4712-a5ed-6b25328d0118-run-systemd\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565321 4998 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/fc7150c6-b180-4712-a5ed-6b25328d0118-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565333 4998 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-env-overrides\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565359 4998 
reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9lvgz\" (UniqueName: \"kubernetes.io/projected/fc7150c6-b180-4712-a5ed-6b25328d0118-kube-api-access-9lvgz\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565380 4998 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.565393 4998 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/fc7150c6-b180-4712-a5ed-6b25328d0118-ovnkube-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666642 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-ovn\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666749 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666778 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-cni-bin\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666796 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666824 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7dks\" (UniqueName: \"kubernetes.io/projected/82575557-7c3a-4f8d-ad07-50a1a07cb731-kube-api-access-c7dks\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666824 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-cni-bin\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666775 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-ovn\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666885 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-kubelet\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666912 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-systemd-units\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666913 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-kubelet\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.666963 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-run-ovn-kubernetes\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667015 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-systemd-units\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667033 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-run-ovn-kubernetes\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667063 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovn-node-metrics-cert\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667088 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-run-netns\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667125 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-systemd\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667205 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-run-systemd\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667212 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-run-netns\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667231 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-cni-netd\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667279 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-cni-netd\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667298 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667373 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667414 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-slash\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667443 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-node-log\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667518 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-node-log\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667529 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovnkube-config\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667551 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-var-lib-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667492 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-host-slash\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667612 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-log-socket\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667672 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-var-lib-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667694 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-log-socket\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667711 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-etc-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667738 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovnkube-script-lib\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.668406 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovnkube-config\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.668455 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovnkube-script-lib\") pod \"ovnkube-node-zcct5\" (UID: 
\"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.667768 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/82575557-7c3a-4f8d-ad07-50a1a07cb731-etc-openvswitch\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.668551 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-env-overrides\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.669507 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/82575557-7c3a-4f8d-ad07-50a1a07cb731-env-overrides\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.670865 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/82575557-7c3a-4f8d-ad07-50a1a07cb731-ovn-node-metrics-cert\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.684162 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7dks\" (UniqueName: \"kubernetes.io/projected/82575557-7c3a-4f8d-ad07-50a1a07cb731-kube-api-access-c7dks\") pod \"ovnkube-node-zcct5\" (UID: \"82575557-7c3a-4f8d-ad07-50a1a07cb731\") " pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.755905 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:41 crc kubenswrapper[4998]: W1208 19:01:41.782596 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod82575557_7c3a_4f8d_ad07_50a1a07cb731.slice/crio-1356d68236ad88c11d0ac933c728328695f0b7ce5c644af263b8afd910081df2 WatchSource:0}: Error finding container 1356d68236ad88c11d0ac933c728328695f0b7ce5c644af263b8afd910081df2: Status 404 returned error can't find the container with id 1356d68236ad88c11d0ac933c728328695f0b7ce5c644af263b8afd910081df2 Dec 08 19:01:41 crc kubenswrapper[4998]: I1208 19:01:41.830739 4998 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.260248 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7zr9_fc7150c6-b180-4712-a5ed-6b25328d0118/ovn-acl-logging/0.log" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.260766 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-h7zr9_fc7150c6-b180-4712-a5ed-6b25328d0118/ovn-controller/0.log" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.261103 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4" exitCode=0 Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.261119 4998 generic.go:358] "Generic (PLEG): container finished" podID="fc7150c6-b180-4712-a5ed-6b25328d0118" containerID="8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10" exitCode=0 Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.261173 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.261201 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.261211 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" event={"ID":"fc7150c6-b180-4712-a5ed-6b25328d0118","Type":"ContainerDied","Data":"2bb7de2650f3dfcf05791935a887950f2a4579e0ca79457182f81ee6ff637412"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.261227 4998 scope.go:117] "RemoveContainer" containerID="ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.261371 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-h7zr9" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.267771 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" event={"ID":"04baa9da-4e84-4493-9eba-92d838f417fd","Type":"ContainerStarted","Data":"ad2be35bf184351f1ad858c22ca503b8dc9abeb84830ddb0abe4dbcebb5f9d51"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.267896 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" event={"ID":"04baa9da-4e84-4493-9eba-92d838f417fd","Type":"ContainerStarted","Data":"043f1c5c478f0014912d09653f12b5ff7530d2f79280e98d5c4ab7549f6452b4"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.267908 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" event={"ID":"04baa9da-4e84-4493-9eba-92d838f417fd","Type":"ContainerStarted","Data":"282aa329396e69a2690b0adcdf49c88e3efe2ec6c06d388f85e194d93e2e665a"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.271222 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-72nfz_88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa/kube-multus/0.log" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.271404 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-72nfz" event={"ID":"88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa","Type":"ContainerStarted","Data":"5613d73c6a286977ce30e331ab75c171b8c406bfaecbe18b02d6f6a357638627"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.273666 4998 generic.go:358] "Generic (PLEG): container finished" podID="82575557-7c3a-4f8d-ad07-50a1a07cb731" containerID="b34a5a5c642dc87c8dbd16cf9dfa22736a3d02282e87bf17b1e8e6aac2be74c0" exitCode=0 Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.273810 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerDied","Data":"b34a5a5c642dc87c8dbd16cf9dfa22736a3d02282e87bf17b1e8e6aac2be74c0"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.273841 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"1356d68236ad88c11d0ac933c728328695f0b7ce5c644af263b8afd910081df2"} Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.296859 4998 scope.go:117] "RemoveContainer" containerID="8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.324609 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-klgv6" podStartSLOduration=2.324582414 podStartE2EDuration="2.324582414s" podCreationTimestamp="2025-12-08 19:01:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:01:42.294446864 +0000 UTC m=+605.942489574" watchObservedRunningTime="2025-12-08 19:01:42.324582414 +0000 UTC m=+605.972625104" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.333907 4998 scope.go:117] "RemoveContainer" containerID="ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.359715 4998 kubelet.go:2553] "SyncLoop 
DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h7zr9"] Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.364604 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-h7zr9"] Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.375958 4998 scope.go:117] "RemoveContainer" containerID="9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.420002 4998 scope.go:117] "RemoveContainer" containerID="6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.438103 4998 scope.go:117] "RemoveContainer" containerID="e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.472429 4998 scope.go:117] "RemoveContainer" containerID="f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.492075 4998 scope.go:117] "RemoveContainer" containerID="ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.512784 4998 scope.go:117] "RemoveContainer" containerID="40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.543954 4998 scope.go:117] "RemoveContainer" containerID="ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.544420 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4\": container with ID starting with ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4 not found: ID does not exist" containerID="ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.544465 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4"} err="failed to get container status \"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4\": rpc error: code = NotFound desc = could not find container \"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4\": container with ID starting with ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.544492 4998 scope.go:117] "RemoveContainer" containerID="8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.544822 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10\": container with ID starting with 8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10 not found: ID does not exist" containerID="8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.544858 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10"} err="failed to get container status \"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10\": rpc error: code = 
NotFound desc = could not find container \"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10\": container with ID starting with 8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.544884 4998 scope.go:117] "RemoveContainer" containerID="ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.545161 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e\": container with ID starting with ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e not found: ID does not exist" containerID="ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545198 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e"} err="failed to get container status \"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e\": rpc error: code = NotFound desc = could not find container \"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e\": container with ID starting with ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545214 4998 scope.go:117] "RemoveContainer" containerID="9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.545422 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16\": container with ID starting with 9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16 not found: ID does not exist" containerID="9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545448 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16"} err="failed to get container status \"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16\": rpc error: code = NotFound desc = could not find container \"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16\": container with ID starting with 9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545465 4998 scope.go:117] "RemoveContainer" containerID="6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.545707 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb\": container with ID starting with 6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb not found: ID does not exist" containerID="6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545739 4998 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb"} err="failed to get container status \"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb\": rpc error: code = NotFound desc = could not find container \"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb\": container with ID starting with 6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545759 4998 scope.go:117] "RemoveContainer" containerID="e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.545950 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5\": container with ID starting with e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5 not found: ID does not exist" containerID="e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545975 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5"} err="failed to get container status \"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5\": rpc error: code = NotFound desc = could not find container \"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5\": container with ID starting with e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.545993 4998 scope.go:117] "RemoveContainer" containerID="f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.546205 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d\": container with ID starting with f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d not found: ID does not exist" containerID="f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546247 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d"} err="failed to get container status \"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d\": rpc error: code = NotFound desc = could not find container \"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d\": container with ID starting with f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546265 4998 scope.go:117] "RemoveContainer" containerID="ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.546456 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8\": container with ID starting with ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8 not found: ID does not exist" 
containerID="ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546486 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8"} err="failed to get container status \"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8\": rpc error: code = NotFound desc = could not find container \"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8\": container with ID starting with ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546504 4998 scope.go:117] "RemoveContainer" containerID="40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4" Dec 08 19:01:42 crc kubenswrapper[4998]: E1208 19:01:42.546664 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4\": container with ID starting with 40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4 not found: ID does not exist" containerID="40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546722 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4"} err="failed to get container status \"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4\": rpc error: code = NotFound desc = could not find container \"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4\": container with ID starting with 40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546739 4998 scope.go:117] "RemoveContainer" containerID="ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546913 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4"} err="failed to get container status \"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4\": rpc error: code = NotFound desc = could not find container \"ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4\": container with ID starting with ecb6001467fb961a883e70792a116c6f348166d30792b868e53b3d7782faf5e4 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.546928 4998 scope.go:117] "RemoveContainer" containerID="8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547093 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10"} err="failed to get container status \"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10\": rpc error: code = NotFound desc = could not find container \"8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10\": container with ID starting with 8d4e48a23af89b8d17219e49517459ebc89e0565a5e54062806bc61337717a10 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547123 4998 scope.go:117] "RemoveContainer" 
containerID="ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547300 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e"} err="failed to get container status \"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e\": rpc error: code = NotFound desc = could not find container \"ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e\": container with ID starting with ed0b78208dc5d6755e19625754549f625983922b7ac7879c809323e71e2c2c9e not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547318 4998 scope.go:117] "RemoveContainer" containerID="9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547542 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16"} err="failed to get container status \"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16\": rpc error: code = NotFound desc = could not find container \"9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16\": container with ID starting with 9ea55db9612e387ead5db2ff33e01bdf3ab59f7df4e67bd5ddca399ed50e1e16 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547564 4998 scope.go:117] "RemoveContainer" containerID="6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547737 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb"} err="failed to get container status \"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb\": rpc error: code = NotFound desc = could not find container \"6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb\": container with ID starting with 6ee3f7d5644dedcefadc56283ee68e04b20548022ba832197a0b64ab995f46fb not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547763 4998 scope.go:117] "RemoveContainer" containerID="e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547935 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5"} err="failed to get container status \"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5\": rpc error: code = NotFound desc = could not find container \"e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5\": container with ID starting with e64b017a48b3ff9adb58b0ef2e74f498ddce919d21a1e8053568c07aa11ce5d5 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.547956 4998 scope.go:117] "RemoveContainer" containerID="f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.548103 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d"} err="failed to get container status \"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d\": rpc error: code = NotFound desc = could not find 
container \"f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d\": container with ID starting with f9ae44d5452238ea8f98040fc3148a50dce1b681dd3fed80cea2a9046b6ed52d not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.548119 4998 scope.go:117] "RemoveContainer" containerID="ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.548247 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8"} err="failed to get container status \"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8\": rpc error: code = NotFound desc = could not find container \"ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8\": container with ID starting with ef8d27356a576b12e65189acdd13a1eb535d3a624bfaa2c4e5152a37b66097b8 not found: ID does not exist" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.548258 4998 scope.go:117] "RemoveContainer" containerID="40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4" Dec 08 19:01:42 crc kubenswrapper[4998]: I1208 19:01:42.548424 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4"} err="failed to get container status \"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4\": rpc error: code = NotFound desc = could not find container \"40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4\": container with ID starting with 40b45fa4f634a35d2732b9554e2cfbdab6bb442becd4a252166e932584ff76f4 not found: ID does not exist" Dec 08 19:01:44 crc kubenswrapper[4998]: I1208 19:01:44.461774 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc7150c6-b180-4712-a5ed-6b25328d0118" path="/var/lib/kubelet/pods/fc7150c6-b180-4712-a5ed-6b25328d0118/volumes" Dec 08 19:01:44 crc kubenswrapper[4998]: E1208 19:01:44.463065 4998 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.097s" Dec 08 19:01:44 crc kubenswrapper[4998]: I1208 19:01:44.465249 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"97e890770b794bf99f71e5b5312955681bbce8d57772f4e871f3abace21c81d4"} Dec 08 19:01:44 crc kubenswrapper[4998]: I1208 19:01:44.465288 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"500d4e4918c870e5d1945c42e3a80482edbeff607fdcc0b7c7e9fcc405ed5c93"} Dec 08 19:01:44 crc kubenswrapper[4998]: I1208 19:01:44.465302 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"178d16435e3977f011567eafecb48a820aca231a5f7b3757cfd78428c5d9a59d"} Dec 08 19:01:44 crc kubenswrapper[4998]: I1208 19:01:44.465315 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"6057b87412053c248a9599f9205ed097f4d7c27cf0451511ad983a159782b0dc"} Dec 08 19:01:44 crc kubenswrapper[4998]: I1208 19:01:44.465327 4998 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"37661271aa1b4da1fecad88bf04ff178205342b6053a574fa15d93c0e1310460"} Dec 08 19:01:45 crc kubenswrapper[4998]: I1208 19:01:45.477772 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"b9514fab987e90da4e32fca7ef1ae3fe2a5c5caebc32f1946276569e4f948cc9"} Dec 08 19:01:47 crc kubenswrapper[4998]: I1208 19:01:47.495420 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"52b78c48ec5a312f1d61475191320758e714a50fb91ca5dd588ca96fead7917d"} Dec 08 19:01:50 crc kubenswrapper[4998]: I1208 19:01:50.527352 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" event={"ID":"82575557-7c3a-4f8d-ad07-50a1a07cb731","Type":"ContainerStarted","Data":"a5f05c270835f913191844773ffc8b9ed5a22eac3230e7f409deffb0039e31a0"} Dec 08 19:01:50 crc kubenswrapper[4998]: I1208 19:01:50.528002 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:50 crc kubenswrapper[4998]: I1208 19:01:50.528022 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:50 crc kubenswrapper[4998]: I1208 19:01:50.574254 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:50 crc kubenswrapper[4998]: I1208 19:01:50.576742 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" podStartSLOduration=9.576723557 podStartE2EDuration="9.576723557s" podCreationTimestamp="2025-12-08 19:01:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:01:50.572119969 +0000 UTC m=+614.220162659" watchObservedRunningTime="2025-12-08 19:01:50.576723557 +0000 UTC m=+614.224766267" Dec 08 19:01:51 crc kubenswrapper[4998]: I1208 19:01:51.539380 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:51 crc kubenswrapper[4998]: I1208 19:01:51.579944 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:01:52 crc kubenswrapper[4998]: I1208 19:01:52.496575 4998 ???:1] "http: TLS handshake error from 192.168.126.11:55440: no serving certificate available for the kubelet" Dec 08 19:02:01 crc kubenswrapper[4998]: I1208 19:02:01.233170 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:02:01 crc kubenswrapper[4998]: I1208 19:02:01.234020 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:02:23 crc kubenswrapper[4998]: I1208 19:02:23.571285 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-zcct5" Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.232765 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.233095 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.233156 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.234035 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c0744be29aaa95b47f57535508586b72c282f591a8c2a2c6ed250f260c5fd85a"} pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.234883 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" containerID="cri-o://c0744be29aaa95b47f57535508586b72c282f591a8c2a2c6ed250f260c5fd85a" gracePeriod=600 Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.831185 4998 generic.go:358] "Generic (PLEG): container finished" podID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerID="c0744be29aaa95b47f57535508586b72c282f591a8c2a2c6ed250f260c5fd85a" exitCode=0 Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.831335 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerDied","Data":"c0744be29aaa95b47f57535508586b72c282f591a8c2a2c6ed250f260c5fd85a"} Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.831386 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"244aa3c38fd1050a3c3363d7b092b6291688366b9c539b044db265cb9764a791"} Dec 08 19:02:31 crc kubenswrapper[4998]: I1208 19:02:31.831422 4998 scope.go:117] "RemoveContainer" containerID="d2f2eaca5a5842e093b639eb96abc33ec6a8a19f824e70f15725948e6787494a" Dec 08 19:02:48 crc kubenswrapper[4998]: I1208 19:02:48.819856 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzszn"] Dec 08 19:02:48 crc kubenswrapper[4998]: I1208 19:02:48.820729 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-fzszn" 
podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="registry-server" containerID="cri-o://4c425f2bcff6326c4bb28ba8564268e72a2a9b998a3eff416f2be66328d3719b" gracePeriod=30 Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.013538 4998 generic.go:358] "Generic (PLEG): container finished" podID="0576ce95-74b4-4a08-8607-85b622a77e83" containerID="4c425f2bcff6326c4bb28ba8564268e72a2a9b998a3eff416f2be66328d3719b" exitCode=0 Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.013736 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzszn" event={"ID":"0576ce95-74b4-4a08-8607-85b622a77e83","Type":"ContainerDied","Data":"4c425f2bcff6326c4bb28ba8564268e72a2a9b998a3eff416f2be66328d3719b"} Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.192141 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.279737 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-catalog-content\") pod \"0576ce95-74b4-4a08-8607-85b622a77e83\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.279825 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-utilities\") pod \"0576ce95-74b4-4a08-8607-85b622a77e83\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.279854 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgsq2\" (UniqueName: \"kubernetes.io/projected/0576ce95-74b4-4a08-8607-85b622a77e83-kube-api-access-qgsq2\") pod \"0576ce95-74b4-4a08-8607-85b622a77e83\" (UID: \"0576ce95-74b4-4a08-8607-85b622a77e83\") " Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.281179 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-utilities" (OuterVolumeSpecName: "utilities") pod "0576ce95-74b4-4a08-8607-85b622a77e83" (UID: "0576ce95-74b4-4a08-8607-85b622a77e83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.290955 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0576ce95-74b4-4a08-8607-85b622a77e83-kube-api-access-qgsq2" (OuterVolumeSpecName: "kube-api-access-qgsq2") pod "0576ce95-74b4-4a08-8607-85b622a77e83" (UID: "0576ce95-74b4-4a08-8607-85b622a77e83"). InnerVolumeSpecName "kube-api-access-qgsq2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.291421 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0576ce95-74b4-4a08-8607-85b622a77e83" (UID: "0576ce95-74b4-4a08-8607-85b622a77e83"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.381210 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.381260 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0576ce95-74b4-4a08-8607-85b622a77e83-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.381276 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgsq2\" (UniqueName: \"kubernetes.io/projected/0576ce95-74b4-4a08-8607-85b622a77e83-kube-api-access-qgsq2\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.850920 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-942zv"] Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.852367 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="extract-utilities" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.852470 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="extract-utilities" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.852560 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="registry-server" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.852616 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="registry-server" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.852674 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="extract-content" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.852751 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="extract-content" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.852911 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" containerName="registry-server" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.859973 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.882951 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-942zv"] Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991375 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c67c825d-1b81-417f-adf0-2d26932ff0ea-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991443 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991535 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c67c825d-1b81-417f-adf0-2d26932ff0ea-registry-certificates\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991633 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c67c825d-1b81-417f-adf0-2d26932ff0ea-trusted-ca\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991665 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-bound-sa-token\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991762 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c67c825d-1b81-417f-adf0-2d26932ff0ea-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991812 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-registry-tls\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:49 crc kubenswrapper[4998]: I1208 19:02:49.991854 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-klk2v\" (UniqueName: 
\"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-kube-api-access-klk2v\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.023678 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-fzszn" event={"ID":"0576ce95-74b4-4a08-8607-85b622a77e83","Type":"ContainerDied","Data":"d76e4c095c1e522c1e3d4aee7ccfd2eb7c117a8a32db08382516ec320ac04aef"} Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.023836 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-fzszn" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.024295 4998 scope.go:117] "RemoveContainer" containerID="4c425f2bcff6326c4bb28ba8564268e72a2a9b998a3eff416f2be66328d3719b" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.027679 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.047105 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzszn"] Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.050822 4998 scope.go:117] "RemoveContainer" containerID="273c9af024aaaf69ec9201fca7a9a2585a4bd47cfc15e7c79f2b5d4c1e828127" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.052313 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-fzszn"] Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.066522 4998 scope.go:117] "RemoveContainer" containerID="04e4204a07c7e6b0989bb89be1d57e1bf84f936465ae5be30c2132f34ff51b5c" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.097287 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c67c825d-1b81-417f-adf0-2d26932ff0ea-registry-certificates\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.097381 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c67c825d-1b81-417f-adf0-2d26932ff0ea-trusted-ca\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.097410 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-bound-sa-token\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.097451 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: 
\"kubernetes.io/secret/c67c825d-1b81-417f-adf0-2d26932ff0ea-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.097494 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-registry-tls\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.097521 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-klk2v\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-kube-api-access-klk2v\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.097567 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c67c825d-1b81-417f-adf0-2d26932ff0ea-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.098772 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/c67c825d-1b81-417f-adf0-2d26932ff0ea-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.099139 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/c67c825d-1b81-417f-adf0-2d26932ff0ea-registry-certificates\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.099866 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c67c825d-1b81-417f-adf0-2d26932ff0ea-trusted-ca\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.106029 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/c67c825d-1b81-417f-adf0-2d26932ff0ea-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.112992 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-registry-tls\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc 
kubenswrapper[4998]: I1208 19:02:50.119186 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-bound-sa-token\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.122497 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-klk2v\" (UniqueName: \"kubernetes.io/projected/c67c825d-1b81-417f-adf0-2d26932ff0ea-kube-api-access-klk2v\") pod \"image-registry-5d9d95bf5b-942zv\" (UID: \"c67c825d-1b81-417f-adf0-2d26932ff0ea\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.176999 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:50 crc kubenswrapper[4998]: I1208 19:02:50.627072 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-942zv"] Dec 08 19:02:50 crc kubenswrapper[4998]: W1208 19:02:50.631673 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc67c825d_1b81_417f_adf0_2d26932ff0ea.slice/crio-39fd810d343f6416ca35e2eccd276b76e29836ec12b93d72179de8debb506c0b WatchSource:0}: Error finding container 39fd810d343f6416ca35e2eccd276b76e29836ec12b93d72179de8debb506c0b: Status 404 returned error can't find the container with id 39fd810d343f6416ca35e2eccd276b76e29836ec12b93d72179de8debb506c0b Dec 08 19:02:51 crc kubenswrapper[4998]: I1208 19:02:51.030957 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" event={"ID":"c67c825d-1b81-417f-adf0-2d26932ff0ea","Type":"ContainerStarted","Data":"326c91173c06235b9860af96bab0c8082f47647e94e68a49a9522f8ebb126e40"} Dec 08 19:02:51 crc kubenswrapper[4998]: I1208 19:02:51.031308 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" event={"ID":"c67c825d-1b81-417f-adf0-2d26932ff0ea","Type":"ContainerStarted","Data":"39fd810d343f6416ca35e2eccd276b76e29836ec12b93d72179de8debb506c0b"} Dec 08 19:02:51 crc kubenswrapper[4998]: I1208 19:02:51.031351 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:02:51 crc kubenswrapper[4998]: I1208 19:02:51.062437 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" podStartSLOduration=2.062416848 podStartE2EDuration="2.062416848s" podCreationTimestamp="2025-12-08 19:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:02:51.056469448 +0000 UTC m=+674.704512178" watchObservedRunningTime="2025-12-08 19:02:51.062416848 +0000 UTC m=+674.710459558" Dec 08 19:02:51 crc kubenswrapper[4998]: I1208 19:02:51.372646 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0576ce95-74b4-4a08-8607-85b622a77e83" path="/var/lib/kubelet/pods/0576ce95-74b4-4a08-8607-85b622a77e83/volumes" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.546971 4998 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h"] Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.568340 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h"] Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.568536 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.571074 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.628959 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.629297 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hrb6d\" (UniqueName: \"kubernetes.io/projected/57e56791-3969-46d8-8bd4-4fa2a648db5c-kube-api-access-hrb6d\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.629468 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.730447 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.730858 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.731037 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hrb6d\" (UniqueName: \"kubernetes.io/projected/57e56791-3969-46d8-8bd4-4fa2a648db5c-kube-api-access-hrb6d\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " 
pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.731086 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-bundle\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.731374 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-util\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.755866 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hrb6d\" (UniqueName: \"kubernetes.io/projected/57e56791-3969-46d8-8bd4-4fa2a648db5c-kube-api-access-hrb6d\") pod \"6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:52 crc kubenswrapper[4998]: I1208 19:02:52.885826 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:53 crc kubenswrapper[4998]: I1208 19:02:53.156473 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h"] Dec 08 19:02:53 crc kubenswrapper[4998]: W1208 19:02:53.160618 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod57e56791_3969_46d8_8bd4_4fa2a648db5c.slice/crio-cc4cc250ff5f656a49a939b4358eb73d7ee2f74963ce2f359addeec9d5a6422e WatchSource:0}: Error finding container cc4cc250ff5f656a49a939b4358eb73d7ee2f74963ce2f359addeec9d5a6422e: Status 404 returned error can't find the container with id cc4cc250ff5f656a49a939b4358eb73d7ee2f74963ce2f359addeec9d5a6422e Dec 08 19:02:54 crc kubenswrapper[4998]: I1208 19:02:54.067127 4998 generic.go:358] "Generic (PLEG): container finished" podID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerID="ba8c4ed839f97dee9c3d7f7fb11672d9d387caea0772cc1b1c68623c9c05fdce" exitCode=0 Dec 08 19:02:54 crc kubenswrapper[4998]: I1208 19:02:54.067199 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" event={"ID":"57e56791-3969-46d8-8bd4-4fa2a648db5c","Type":"ContainerDied","Data":"ba8c4ed839f97dee9c3d7f7fb11672d9d387caea0772cc1b1c68623c9c05fdce"} Dec 08 19:02:54 crc kubenswrapper[4998]: I1208 19:02:54.067576 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" event={"ID":"57e56791-3969-46d8-8bd4-4fa2a648db5c","Type":"ContainerStarted","Data":"cc4cc250ff5f656a49a939b4358eb73d7ee2f74963ce2f359addeec9d5a6422e"} Dec 08 19:02:56 crc kubenswrapper[4998]: I1208 19:02:56.082285 4998 generic.go:358] "Generic (PLEG): container finished" 
podID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerID="c8b966a72794363949c5b49900e0e1ad0df2fa9614116aeb6e5588931f7d1148" exitCode=0 Dec 08 19:02:56 crc kubenswrapper[4998]: I1208 19:02:56.082453 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" event={"ID":"57e56791-3969-46d8-8bd4-4fa2a648db5c","Type":"ContainerDied","Data":"c8b966a72794363949c5b49900e0e1ad0df2fa9614116aeb6e5588931f7d1148"} Dec 08 19:02:57 crc kubenswrapper[4998]: I1208 19:02:57.094486 4998 generic.go:358] "Generic (PLEG): container finished" podID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerID="6d00df7d86e0aa31b33db3428302f1826bce343cba4d822da31d99f2707f3aa3" exitCode=0 Dec 08 19:02:57 crc kubenswrapper[4998]: I1208 19:02:57.094591 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" event={"ID":"57e56791-3969-46d8-8bd4-4fa2a648db5c","Type":"ContainerDied","Data":"6d00df7d86e0aa31b33db3428302f1826bce343cba4d822da31d99f2707f3aa3"} Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.369253 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.419506 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-bundle\") pod \"57e56791-3969-46d8-8bd4-4fa2a648db5c\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.419586 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hrb6d\" (UniqueName: \"kubernetes.io/projected/57e56791-3969-46d8-8bd4-4fa2a648db5c-kube-api-access-hrb6d\") pod \"57e56791-3969-46d8-8bd4-4fa2a648db5c\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.419657 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-util\") pod \"57e56791-3969-46d8-8bd4-4fa2a648db5c\" (UID: \"57e56791-3969-46d8-8bd4-4fa2a648db5c\") " Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.423101 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-bundle" (OuterVolumeSpecName: "bundle") pod "57e56791-3969-46d8-8bd4-4fa2a648db5c" (UID: "57e56791-3969-46d8-8bd4-4fa2a648db5c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.426141 4998 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.428893 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57e56791-3969-46d8-8bd4-4fa2a648db5c-kube-api-access-hrb6d" (OuterVolumeSpecName: "kube-api-access-hrb6d") pod "57e56791-3969-46d8-8bd4-4fa2a648db5c" (UID: "57e56791-3969-46d8-8bd4-4fa2a648db5c"). InnerVolumeSpecName "kube-api-access-hrb6d". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.431098 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-util" (OuterVolumeSpecName: "util") pod "57e56791-3969-46d8-8bd4-4fa2a648db5c" (UID: "57e56791-3969-46d8-8bd4-4fa2a648db5c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.527537 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hrb6d\" (UniqueName: \"kubernetes.io/projected/57e56791-3969-46d8-8bd4-4fa2a648db5c-kube-api-access-hrb6d\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.527575 4998 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/57e56791-3969-46d8-8bd4-4fa2a648db5c-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.727496 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj"] Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.728427 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerName="extract" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.728477 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerName="extract" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.728498 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerName="util" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.728509 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerName="util" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.728561 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerName="pull" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.728570 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerName="pull" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.728755 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="57e56791-3969-46d8-8bd4-4fa2a648db5c" containerName="extract" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.743497 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.758994 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj"] Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.831909 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k49l9\" (UniqueName: \"kubernetes.io/projected/70983796-bd01-4b7c-ae23-0642eb25d6c1-kube-api-access-k49l9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.831992 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.832113 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.932936 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.933038 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.933425 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-k49l9\" (UniqueName: \"kubernetes.io/projected/70983796-bd01-4b7c-ae23-0642eb25d6c1-kube-api-access-k49l9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.933493 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.933750 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:58 crc kubenswrapper[4998]: I1208 19:02:58.956591 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-k49l9\" (UniqueName: \"kubernetes.io/projected/70983796-bd01-4b7c-ae23-0642eb25d6c1-kube-api-access-k49l9\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:59 crc kubenswrapper[4998]: I1208 19:02:59.062629 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:02:59 crc kubenswrapper[4998]: I1208 19:02:59.108730 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" event={"ID":"57e56791-3969-46d8-8bd4-4fa2a648db5c","Type":"ContainerDied","Data":"cc4cc250ff5f656a49a939b4358eb73d7ee2f74963ce2f359addeec9d5a6422e"} Dec 08 19:02:59 crc kubenswrapper[4998]: I1208 19:02:59.108808 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6c372a8d094fad7255d3bbeabb4914bd2356af7b203a2d2176be1c92104nk9h" Dec 08 19:02:59 crc kubenswrapper[4998]: I1208 19:02:59.108817 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc4cc250ff5f656a49a939b4358eb73d7ee2f74963ce2f359addeec9d5a6422e" Dec 08 19:02:59 crc kubenswrapper[4998]: I1208 19:02:59.307031 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj"] Dec 08 19:02:59 crc kubenswrapper[4998]: W1208 19:02:59.311494 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70983796_bd01_4b7c_ae23_0642eb25d6c1.slice/crio-d37d69ef47ecf42a3b9c61a15f1347b98c24f048d36809c36a247e121a93bd93 WatchSource:0}: Error finding container d37d69ef47ecf42a3b9c61a15f1347b98c24f048d36809c36a247e121a93bd93: Status 404 returned error can't find the container with id d37d69ef47ecf42a3b9c61a15f1347b98c24f048d36809c36a247e121a93bd93 Dec 08 19:03:00 crc kubenswrapper[4998]: I1208 19:03:00.119842 4998 generic.go:358] "Generic (PLEG): container finished" podID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerID="91ec1bda330f8f13a90c9b56c752ab93a8017115818584641fbace5e03e73573" exitCode=0 Dec 08 19:03:00 crc kubenswrapper[4998]: I1208 19:03:00.120012 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" event={"ID":"70983796-bd01-4b7c-ae23-0642eb25d6c1","Type":"ContainerDied","Data":"91ec1bda330f8f13a90c9b56c752ab93a8017115818584641fbace5e03e73573"} Dec 08 19:03:00 crc kubenswrapper[4998]: I1208 19:03:00.120095 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" event={"ID":"70983796-bd01-4b7c-ae23-0642eb25d6c1","Type":"ContainerStarted","Data":"d37d69ef47ecf42a3b9c61a15f1347b98c24f048d36809c36a247e121a93bd93"} Dec 08 19:03:01 crc kubenswrapper[4998]: I1208 19:03:01.128768 4998 generic.go:358] "Generic (PLEG): container finished" podID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerID="95a7c9a7628b122c0200cafb21738422221dd9d872dc9314ddc753ff84a6ba29" exitCode=0 Dec 08 19:03:01 crc kubenswrapper[4998]: I1208 19:03:01.128842 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" event={"ID":"70983796-bd01-4b7c-ae23-0642eb25d6c1","Type":"ContainerDied","Data":"95a7c9a7628b122c0200cafb21738422221dd9d872dc9314ddc753ff84a6ba29"} Dec 08 19:03:02 crc kubenswrapper[4998]: I1208 19:03:02.138078 4998 generic.go:358] "Generic (PLEG): container finished" podID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerID="6d8b3681f5587156508a202f393bb06561acfa5c7fcda2dfdd916e3160358007" exitCode=0 Dec 08 19:03:02 crc kubenswrapper[4998]: I1208 19:03:02.138125 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" event={"ID":"70983796-bd01-4b7c-ae23-0642eb25d6c1","Type":"ContainerDied","Data":"6d8b3681f5587156508a202f393bb06561acfa5c7fcda2dfdd916e3160358007"} Dec 08 19:03:02 crc kubenswrapper[4998]: I1208 19:03:02.849264 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6"] Dec 08 19:03:02 crc kubenswrapper[4998]: I1208 19:03:02.867177 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:02 crc kubenswrapper[4998]: I1208 19:03:02.868878 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6"] Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.010800 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwrkv\" (UniqueName: \"kubernetes.io/projected/3ea85968-8256-409e-8aa2-e7671f116fd2-kube-api-access-pwrkv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.010870 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.011017 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.112245 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.112321 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.112411 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pwrkv\" (UniqueName: \"kubernetes.io/projected/3ea85968-8256-409e-8aa2-e7671f116fd2-kube-api-access-pwrkv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.112818 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " 
pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.112881 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.163774 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pwrkv\" (UniqueName: \"kubernetes.io/projected/3ea85968-8256-409e-8aa2-e7671f116fd2-kube-api-access-pwrkv\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.183239 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.542514 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.632801 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-util\") pod \"70983796-bd01-4b7c-ae23-0642eb25d6c1\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.632890 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-bundle\") pod \"70983796-bd01-4b7c-ae23-0642eb25d6c1\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.632931 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k49l9\" (UniqueName: \"kubernetes.io/projected/70983796-bd01-4b7c-ae23-0642eb25d6c1-kube-api-access-k49l9\") pod \"70983796-bd01-4b7c-ae23-0642eb25d6c1\" (UID: \"70983796-bd01-4b7c-ae23-0642eb25d6c1\") " Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.634129 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-bundle" (OuterVolumeSpecName: "bundle") pod "70983796-bd01-4b7c-ae23-0642eb25d6c1" (UID: "70983796-bd01-4b7c-ae23-0642eb25d6c1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.641904 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70983796-bd01-4b7c-ae23-0642eb25d6c1-kube-api-access-k49l9" (OuterVolumeSpecName: "kube-api-access-k49l9") pod "70983796-bd01-4b7c-ae23-0642eb25d6c1" (UID: "70983796-bd01-4b7c-ae23-0642eb25d6c1"). InnerVolumeSpecName "kube-api-access-k49l9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:03:03 crc kubenswrapper[4998]: W1208 19:03:03.653428 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3ea85968_8256_409e_8aa2_e7671f116fd2.slice/crio-3c3e06f13d173261e6679d5381ee2caa88cf1a65276b05f72ace0161df9e040e WatchSource:0}: Error finding container 3c3e06f13d173261e6679d5381ee2caa88cf1a65276b05f72ace0161df9e040e: Status 404 returned error can't find the container with id 3c3e06f13d173261e6679d5381ee2caa88cf1a65276b05f72ace0161df9e040e Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.661889 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-util" (OuterVolumeSpecName: "util") pod "70983796-bd01-4b7c-ae23-0642eb25d6c1" (UID: "70983796-bd01-4b7c-ae23-0642eb25d6c1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.736845 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6"] Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.737146 4998 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.737182 4998 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/70983796-bd01-4b7c-ae23-0642eb25d6c1-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:03 crc kubenswrapper[4998]: I1208 19:03:03.737191 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-k49l9\" (UniqueName: \"kubernetes.io/projected/70983796-bd01-4b7c-ae23-0642eb25d6c1-kube-api-access-k49l9\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:04 crc kubenswrapper[4998]: I1208 19:03:04.151363 4998 generic.go:358] "Generic (PLEG): container finished" podID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerID="887c1090516d71f976346ab0d79f8d9f97c43153173a13dc2da1227cb882c7cf" exitCode=0 Dec 08 19:03:04 crc kubenswrapper[4998]: I1208 19:03:04.151459 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" event={"ID":"3ea85968-8256-409e-8aa2-e7671f116fd2","Type":"ContainerDied","Data":"887c1090516d71f976346ab0d79f8d9f97c43153173a13dc2da1227cb882c7cf"} Dec 08 19:03:04 crc kubenswrapper[4998]: I1208 19:03:04.151779 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" event={"ID":"3ea85968-8256-409e-8aa2-e7671f116fd2","Type":"ContainerStarted","Data":"3c3e06f13d173261e6679d5381ee2caa88cf1a65276b05f72ace0161df9e040e"} Dec 08 19:03:04 crc kubenswrapper[4998]: I1208 19:03:04.154845 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj" event={"ID":"70983796-bd01-4b7c-ae23-0642eb25d6c1","Type":"ContainerDied","Data":"d37d69ef47ecf42a3b9c61a15f1347b98c24f048d36809c36a247e121a93bd93"} Dec 08 19:03:04 crc kubenswrapper[4998]: I1208 19:03:04.154891 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d37d69ef47ecf42a3b9c61a15f1347b98c24f048d36809c36a247e121a93bd93" 
Dec 08 19:03:04 crc kubenswrapper[4998]: I1208 19:03:04.154941 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5ejgxnj"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.191370 4998 generic.go:358] "Generic (PLEG): container finished" podID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerID="01d37372aca0c4e8349d19cd12b1e7ec4ab0989d9d9ea83736acd8b5bdd1226c" exitCode=0
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.191584 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" event={"ID":"3ea85968-8256-409e-8aa2-e7671f116fd2","Type":"ContainerDied","Data":"01d37372aca0c4e8349d19cd12b1e7ec4ab0989d9d9ea83736acd8b5bdd1226c"}
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.906039 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"]
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.907048 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerName="pull"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.907075 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerName="pull"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.907097 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerName="util"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.907105 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerName="util"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.907129 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerName="extract"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.907138 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerName="extract"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.907263 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="70983796-bd01-4b7c-ae23-0642eb25d6c1" containerName="extract"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.913410 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.917852 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\""
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.918406 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\""
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.918480 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-qrh69\""
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.926657 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"]
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.970964 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxxtx\" (UniqueName: \"kubernetes.io/projected/dd99edf9-9148-4965-adf4-ea02eab1a032-kube-api-access-rxxtx\") pod \"obo-prometheus-operator-86648f486b-n9qc6\" (UID: \"dd99edf9-9148-4965-adf4-ea02eab1a032\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.978876 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"]
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.988019 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.993993 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"]
Dec 08 19:03:10 crc kubenswrapper[4998]: I1208 19:03:10.998013 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.000182 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"]
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.001188 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\""
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.001299 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-jsqxp\""
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.042593 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"]
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.072372 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rxxtx\" (UniqueName: \"kubernetes.io/projected/dd99edf9-9148-4965-adf4-ea02eab1a032-kube-api-access-rxxtx\") pod \"obo-prometheus-operator-86648f486b-n9qc6\" (UID: \"dd99edf9-9148-4965-adf4-ea02eab1a032\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.072705 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b6f0e4a-17d2-4bfc-99da-0e5adb261671-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-vkmhj\" (UID: \"9b6f0e4a-17d2-4bfc-99da-0e5adb261671\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.072796 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b6f0e4a-17d2-4bfc-99da-0e5adb261671-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-vkmhj\" (UID: \"9b6f0e4a-17d2-4bfc-99da-0e5adb261671\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.110631 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxxtx\" (UniqueName: \"kubernetes.io/projected/dd99edf9-9148-4965-adf4-ea02eab1a032-kube-api-access-rxxtx\") pod \"obo-prometheus-operator-86648f486b-n9qc6\" (UID: \"dd99edf9-9148-4965-adf4-ea02eab1a032\") " pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.141890 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mxqp8"]
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.146999 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-mxqp8"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.158197 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\""
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.159301 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-2fs77\""
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.177505 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d124acb-30a4-423e-bd29-09759cb1697c-observability-operator-tls\") pod \"observability-operator-78c97476f4-mxqp8\" (UID: \"2d124acb-30a4-423e-bd29-09759cb1697c\") " pod="openshift-operators/observability-operator-78c97476f4-mxqp8"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.177582 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c84c97ab-d2db-4251-8ad9-c24f654dcb30-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-rlphb\" (UID: \"c84c97ab-d2db-4251-8ad9-c24f654dcb30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.177622 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c84c97ab-d2db-4251-8ad9-c24f654dcb30-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-rlphb\" (UID: \"c84c97ab-d2db-4251-8ad9-c24f654dcb30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.177666 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5942g\" (UniqueName: \"kubernetes.io/projected/2d124acb-30a4-423e-bd29-09759cb1697c-kube-api-access-5942g\") pod \"observability-operator-78c97476f4-mxqp8\" (UID: \"2d124acb-30a4-423e-bd29-09759cb1697c\") " pod="openshift-operators/observability-operator-78c97476f4-mxqp8"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.177755 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b6f0e4a-17d2-4bfc-99da-0e5adb261671-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-vkmhj\" (UID: \"9b6f0e4a-17d2-4bfc-99da-0e5adb261671\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.177834 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b6f0e4a-17d2-4bfc-99da-0e5adb261671-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-vkmhj\" (UID: \"9b6f0e4a-17d2-4bfc-99da-0e5adb261671\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.190723 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9b6f0e4a-17d2-4bfc-99da-0e5adb261671-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-vkmhj\" (UID: \"9b6f0e4a-17d2-4bfc-99da-0e5adb261671\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.205905 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9b6f0e4a-17d2-4bfc-99da-0e5adb261671-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-vkmhj\" (UID: \"9b6f0e4a-17d2-4bfc-99da-0e5adb261671\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.212618 4998 generic.go:358] "Generic (PLEG): container finished" podID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerID="2e22419a429edc7d1e0eff617af0319acde5af6d5d8cb70e6860fa047120e62f" exitCode=0
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.212889 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" event={"ID":"3ea85968-8256-409e-8aa2-e7671f116fd2","Type":"ContainerDied","Data":"2e22419a429edc7d1e0eff617af0319acde5af6d5d8cb70e6860fa047120e62f"}
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.245548 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mxqp8"]
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.247887 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.280584 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d124acb-30a4-423e-bd29-09759cb1697c-observability-operator-tls\") pod \"observability-operator-78c97476f4-mxqp8\" (UID: \"2d124acb-30a4-423e-bd29-09759cb1697c\") " pod="openshift-operators/observability-operator-78c97476f4-mxqp8"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.280644 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c84c97ab-d2db-4251-8ad9-c24f654dcb30-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-rlphb\" (UID: \"c84c97ab-d2db-4251-8ad9-c24f654dcb30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.280674 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c84c97ab-d2db-4251-8ad9-c24f654dcb30-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-rlphb\" (UID: \"c84c97ab-d2db-4251-8ad9-c24f654dcb30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.280755 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5942g\" (UniqueName: \"kubernetes.io/projected/2d124acb-30a4-423e-bd29-09759cb1697c-kube-api-access-5942g\") pod \"observability-operator-78c97476f4-mxqp8\" (UID: \"2d124acb-30a4-423e-bd29-09759cb1697c\") " pod="openshift-operators/observability-operator-78c97476f4-mxqp8"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.286281 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/2d124acb-30a4-423e-bd29-09759cb1697c-observability-operator-tls\") pod \"observability-operator-78c97476f4-mxqp8\" (UID: \"2d124acb-30a4-423e-bd29-09759cb1697c\") " pod="openshift-operators/observability-operator-78c97476f4-mxqp8"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.296870 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c84c97ab-d2db-4251-8ad9-c24f654dcb30-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-rlphb\" (UID: \"c84c97ab-d2db-4251-8ad9-c24f654dcb30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.297150 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c84c97ab-d2db-4251-8ad9-c24f654dcb30-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-795958f77c-rlphb\" (UID: \"c84c97ab-d2db-4251-8ad9-c24f654dcb30\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.309872 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.313316 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5942g\" (UniqueName: \"kubernetes.io/projected/2d124acb-30a4-423e-bd29-09759cb1697c-kube-api-access-5942g\") pod \"observability-operator-78c97476f4-mxqp8\" (UID: \"2d124acb-30a4-423e-bd29-09759cb1697c\") " pod="openshift-operators/observability-operator-78c97476f4-mxqp8"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.330980 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.349127 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-s6rfw"]
Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.361111 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.365544 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-s6rfw"] Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.366943 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-457cl\"" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.381798 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6fq7t\" (UniqueName: \"kubernetes.io/projected/588fa0eb-0431-45df-b85a-bbbdbbd5828d-kube-api-access-6fq7t\") pod \"perses-operator-68bdb49cbf-s6rfw\" (UID: \"588fa0eb-0431-45df-b85a-bbbdbbd5828d\") " pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.382136 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/588fa0eb-0431-45df-b85a-bbbdbbd5828d-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-s6rfw\" (UID: \"588fa0eb-0431-45df-b85a-bbbdbbd5828d\") " pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.463975 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-78c97476f4-mxqp8" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.491551 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6fq7t\" (UniqueName: \"kubernetes.io/projected/588fa0eb-0431-45df-b85a-bbbdbbd5828d-kube-api-access-6fq7t\") pod \"perses-operator-68bdb49cbf-s6rfw\" (UID: \"588fa0eb-0431-45df-b85a-bbbdbbd5828d\") " pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.492656 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/588fa0eb-0431-45df-b85a-bbbdbbd5828d-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-s6rfw\" (UID: \"588fa0eb-0431-45df-b85a-bbbdbbd5828d\") " pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.493873 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/588fa0eb-0431-45df-b85a-bbbdbbd5828d-openshift-service-ca\") pod \"perses-operator-68bdb49cbf-s6rfw\" (UID: \"588fa0eb-0431-45df-b85a-bbbdbbd5828d\") " pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.537107 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6fq7t\" (UniqueName: \"kubernetes.io/projected/588fa0eb-0431-45df-b85a-bbbdbbd5828d-kube-api-access-6fq7t\") pod \"perses-operator-68bdb49cbf-s6rfw\" (UID: \"588fa0eb-0431-45df-b85a-bbbdbbd5828d\") " pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.694268 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.856012 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb"] Dec 08 19:03:11 crc kubenswrapper[4998]: I1208 19:03:11.984082 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-86648f486b-n9qc6"] Dec 08 19:03:11 crc kubenswrapper[4998]: W1208 19:03:11.997756 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd99edf9_9148_4965_adf4_ea02eab1a032.slice/crio-92f258977f835a0ea70301bab69739b950f68e0faee0f8a373d09f5c2a1d5b46 WatchSource:0}: Error finding container 92f258977f835a0ea70301bab69739b950f68e0faee0f8a373d09f5c2a1d5b46: Status 404 returned error can't find the container with id 92f258977f835a0ea70301bab69739b950f68e0faee0f8a373d09f5c2a1d5b46 Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.035767 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj"] Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.053720 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-942zv" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.159328 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-78c97476f4-mxqp8"] Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.175027 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kj9vm"] Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.231918 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj" event={"ID":"9b6f0e4a-17d2-4bfc-99da-0e5adb261671","Type":"ContainerStarted","Data":"2637695e60db557eb33f4a6ad843383a8ddcc979b7ef711ff839c170980590d4"} Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.234187 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-mxqp8" event={"ID":"2d124acb-30a4-423e-bd29-09759cb1697c","Type":"ContainerStarted","Data":"ebe15bef94d45ce6e915e079af38ca5fa395aca3408cefd48b6da5c9f2cc0e9e"} Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.237472 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6" event={"ID":"dd99edf9-9148-4965-adf4-ea02eab1a032","Type":"ContainerStarted","Data":"92f258977f835a0ea70301bab69739b950f68e0faee0f8a373d09f5c2a1d5b46"} Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.246537 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb" event={"ID":"c84c97ab-d2db-4251-8ad9-c24f654dcb30","Type":"ContainerStarted","Data":"2d8718eb520f88f729e027b0b1ca55aa489c85b8258d0fbd440b48c7d7eb0a13"} Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.318766 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-68bdb49cbf-s6rfw"] Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.515901 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.639160 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-bundle\") pod \"3ea85968-8256-409e-8aa2-e7671f116fd2\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.639231 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pwrkv\" (UniqueName: \"kubernetes.io/projected/3ea85968-8256-409e-8aa2-e7671f116fd2-kube-api-access-pwrkv\") pod \"3ea85968-8256-409e-8aa2-e7671f116fd2\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.639271 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-util\") pod \"3ea85968-8256-409e-8aa2-e7671f116fd2\" (UID: \"3ea85968-8256-409e-8aa2-e7671f116fd2\") " Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.641100 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-bundle" (OuterVolumeSpecName: "bundle") pod "3ea85968-8256-409e-8aa2-e7671f116fd2" (UID: "3ea85968-8256-409e-8aa2-e7671f116fd2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.645149 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea85968-8256-409e-8aa2-e7671f116fd2-kube-api-access-pwrkv" (OuterVolumeSpecName: "kube-api-access-pwrkv") pod "3ea85968-8256-409e-8aa2-e7671f116fd2" (UID: "3ea85968-8256-409e-8aa2-e7671f116fd2"). InnerVolumeSpecName "kube-api-access-pwrkv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.648097 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-util" (OuterVolumeSpecName: "util") pod "3ea85968-8256-409e-8aa2-e7671f116fd2" (UID: "3ea85968-8256-409e-8aa2-e7671f116fd2"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.648932 4998 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.648957 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pwrkv\" (UniqueName: \"kubernetes.io/projected/3ea85968-8256-409e-8aa2-e7671f116fd2-kube-api-access-pwrkv\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.648966 4998 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3ea85968-8256-409e-8aa2-e7671f116fd2-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726228 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-ddc5f5c99-k2khd"] Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726813 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerName="pull" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726830 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerName="pull" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726844 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerName="util" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726850 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerName="util" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726867 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerName="extract" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726875 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerName="extract" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.726993 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ea85968-8256-409e-8aa2-e7671f116fd2" containerName="extract" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.753849 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-ddc5f5c99-k2khd"] Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.753995 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.759646 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.759914 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.760345 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-rrzzt\"" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.760802 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.852843 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2003c92d-da08-40c9-9b2d-82bbe1dbb591-apiservice-cert\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.852898 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdfwj\" (UniqueName: \"kubernetes.io/projected/2003c92d-da08-40c9-9b2d-82bbe1dbb591-kube-api-access-qdfwj\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.852982 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2003c92d-da08-40c9-9b2d-82bbe1dbb591-webhook-cert\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.967466 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2003c92d-da08-40c9-9b2d-82bbe1dbb591-apiservice-cert\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.967517 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdfwj\" (UniqueName: \"kubernetes.io/projected/2003c92d-da08-40c9-9b2d-82bbe1dbb591-kube-api-access-qdfwj\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.967568 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/2003c92d-da08-40c9-9b2d-82bbe1dbb591-webhook-cert\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.985213 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/2003c92d-da08-40c9-9b2d-82bbe1dbb591-webhook-cert\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.986529 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/2003c92d-da08-40c9-9b2d-82bbe1dbb591-apiservice-cert\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:12 crc kubenswrapper[4998]: I1208 19:03:12.993374 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdfwj\" (UniqueName: \"kubernetes.io/projected/2003c92d-da08-40c9-9b2d-82bbe1dbb591-kube-api-access-qdfwj\") pod \"elastic-operator-ddc5f5c99-k2khd\" (UID: \"2003c92d-da08-40c9-9b2d-82bbe1dbb591\") " pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:13 crc kubenswrapper[4998]: I1208 19:03:13.078022 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" Dec 08 19:03:13 crc kubenswrapper[4998]: I1208 19:03:13.353361 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" event={"ID":"3ea85968-8256-409e-8aa2-e7671f116fd2","Type":"ContainerDied","Data":"3c3e06f13d173261e6679d5381ee2caa88cf1a65276b05f72ace0161df9e040e"} Dec 08 19:03:13 crc kubenswrapper[4998]: I1208 19:03:13.353709 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c3e06f13d173261e6679d5381ee2caa88cf1a65276b05f72ace0161df9e040e" Dec 08 19:03:13 crc kubenswrapper[4998]: I1208 19:03:13.353837 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a77lg6" Dec 08 19:03:13 crc kubenswrapper[4998]: I1208 19:03:13.409559 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" event={"ID":"588fa0eb-0431-45df-b85a-bbbdbbd5828d","Type":"ContainerStarted","Data":"94fc40db76d0ee160b3fb2cdc6ba147a169e8e7974706b77e0d34d05fbaa31c3"} Dec 08 19:03:13 crc kubenswrapper[4998]: I1208 19:03:13.684558 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-ddc5f5c99-k2khd"] Dec 08 19:03:14 crc kubenswrapper[4998]: I1208 19:03:14.385148 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" event={"ID":"2003c92d-da08-40c9-9b2d-82bbe1dbb591","Type":"ContainerStarted","Data":"e314f951bfb90a86ab0ac3eb06a7942f5f1cfc9229b6b69b23c626afd992c46a"} Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.376135 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74"] Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.409966 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74"] Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.410181 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.412552 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.412867 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.412600 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-psxf4\"" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.576268 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rls25\" (UniqueName: \"kubernetes.io/projected/8f1133ec-a124-41fc-99f7-4eea97ba4d8f-kube-api-access-rls25\") pod \"cert-manager-operator-controller-manager-64c74584c4-frk74\" (UID: \"8f1133ec-a124-41fc-99f7-4eea97ba4d8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.576319 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8f1133ec-a124-41fc-99f7-4eea97ba4d8f-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-frk74\" (UID: \"8f1133ec-a124-41fc-99f7-4eea97ba4d8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.677886 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rls25\" (UniqueName: \"kubernetes.io/projected/8f1133ec-a124-41fc-99f7-4eea97ba4d8f-kube-api-access-rls25\") pod \"cert-manager-operator-controller-manager-64c74584c4-frk74\" (UID: \"8f1133ec-a124-41fc-99f7-4eea97ba4d8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.677939 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8f1133ec-a124-41fc-99f7-4eea97ba4d8f-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-frk74\" (UID: \"8f1133ec-a124-41fc-99f7-4eea97ba4d8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.678540 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/8f1133ec-a124-41fc-99f7-4eea97ba4d8f-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-frk74\" (UID: \"8f1133ec-a124-41fc-99f7-4eea97ba4d8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.704395 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rls25\" (UniqueName: \"kubernetes.io/projected/8f1133ec-a124-41fc-99f7-4eea97ba4d8f-kube-api-access-rls25\") pod \"cert-manager-operator-controller-manager-64c74584c4-frk74\" (UID: \"8f1133ec-a124-41fc-99f7-4eea97ba4d8f\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:25 crc kubenswrapper[4998]: I1208 19:03:25.737197 4998 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.709327 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj" event={"ID":"9b6f0e4a-17d2-4bfc-99da-0e5adb261671","Type":"ContainerStarted","Data":"d703aeb8c69f642aa9d714cb0834c5bec1dffda29e7ea4f22aca4682a21d2e6f"} Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.709762 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74"] Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.736300 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" event={"ID":"588fa0eb-0431-45df-b85a-bbbdbbd5828d","Type":"ContainerStarted","Data":"177f517b76bf6e688a776262de52319f387d1fee1a2af650780d997ec54d4e4c"} Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.737041 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.752299 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-vkmhj" podStartSLOduration=2.480007293 podStartE2EDuration="24.752282493s" podCreationTimestamp="2025-12-08 19:03:10 +0000 UTC" firstStartedPulling="2025-12-08 19:03:12.063221184 +0000 UTC m=+695.711263874" lastFinishedPulling="2025-12-08 19:03:34.335496384 +0000 UTC m=+717.983539074" observedRunningTime="2025-12-08 19:03:34.747905485 +0000 UTC m=+718.395948175" watchObservedRunningTime="2025-12-08 19:03:34.752282493 +0000 UTC m=+718.400325183" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.752342 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-78c97476f4-mxqp8" event={"ID":"2d124acb-30a4-423e-bd29-09759cb1697c","Type":"ContainerStarted","Data":"be0e8c7f543b77788d0d1d7e2517be64313662e3fa26d35003d17d2f33a96ef6"} Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.752653 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-78c97476f4-mxqp8" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.754070 4998 patch_prober.go:28] interesting pod/observability-operator-78c97476f4-mxqp8 container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.46:8081/healthz\": dial tcp 10.217.0.46:8081: connect: connection refused" start-of-body= Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.754129 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-78c97476f4-mxqp8" podUID="2d124acb-30a4-423e-bd29-09759cb1697c" containerName="operator" probeResult="failure" output="Get \"http://10.217.0.46:8081/healthz\": dial tcp 10.217.0.46:8081: connect: connection refused" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.765241 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb" event={"ID":"c84c97ab-d2db-4251-8ad9-c24f654dcb30","Type":"ContainerStarted","Data":"b7bdf1dd931c910816483d949233ca63ecfb4de5b96c9d9e1b58d750793376a1"} Dec 08 19:03:34 
crc kubenswrapper[4998]: I1208 19:03:34.786808 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" podStartSLOduration=1.841089637 podStartE2EDuration="23.786791336s" podCreationTimestamp="2025-12-08 19:03:11 +0000 UTC" firstStartedPulling="2025-12-08 19:03:12.338598773 +0000 UTC m=+695.986641463" lastFinishedPulling="2025-12-08 19:03:34.284300472 +0000 UTC m=+717.932343162" observedRunningTime="2025-12-08 19:03:34.784063313 +0000 UTC m=+718.432106003" watchObservedRunningTime="2025-12-08 19:03:34.786791336 +0000 UTC m=+718.434834016" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.823246 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-795958f77c-rlphb" podStartSLOduration=2.405336698 podStartE2EDuration="24.823226571s" podCreationTimestamp="2025-12-08 19:03:10 +0000 UTC" firstStartedPulling="2025-12-08 19:03:11.898633199 +0000 UTC m=+695.546675889" lastFinishedPulling="2025-12-08 19:03:34.316523072 +0000 UTC m=+717.964565762" observedRunningTime="2025-12-08 19:03:34.821366851 +0000 UTC m=+718.469409541" watchObservedRunningTime="2025-12-08 19:03:34.823226571 +0000 UTC m=+718.471269261" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.875092 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" podStartSLOduration=2.434178152 podStartE2EDuration="22.875072563s" podCreationTimestamp="2025-12-08 19:03:12 +0000 UTC" firstStartedPulling="2025-12-08 19:03:13.725891927 +0000 UTC m=+697.373934607" lastFinishedPulling="2025-12-08 19:03:34.166786328 +0000 UTC m=+717.814829018" observedRunningTime="2025-12-08 19:03:34.867732634 +0000 UTC m=+718.515775324" watchObservedRunningTime="2025-12-08 19:03:34.875072563 +0000 UTC m=+718.523115253" Dec 08 19:03:34 crc kubenswrapper[4998]: I1208 19:03:34.933377 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-78c97476f4-mxqp8" podStartSLOduration=1.730810873 podStartE2EDuration="23.933362029s" podCreationTimestamp="2025-12-08 19:03:11 +0000 UTC" firstStartedPulling="2025-12-08 19:03:12.195500918 +0000 UTC m=+695.843543608" lastFinishedPulling="2025-12-08 19:03:34.398052074 +0000 UTC m=+718.046094764" observedRunningTime="2025-12-08 19:03:34.931503648 +0000 UTC m=+718.579546338" watchObservedRunningTime="2025-12-08 19:03:34.933362029 +0000 UTC m=+718.581404719" Dec 08 19:03:35 crc kubenswrapper[4998]: I1208 19:03:35.780329 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6" event={"ID":"dd99edf9-9148-4965-adf4-ea02eab1a032","Type":"ContainerStarted","Data":"f46d6f2029440c83ee41b2f9e8595aa39eee0e60cd6c9cd2a770379975e1cf34"} Dec 08 19:03:35 crc kubenswrapper[4998]: I1208 19:03:35.786728 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" event={"ID":"8f1133ec-a124-41fc-99f7-4eea97ba4d8f","Type":"ContainerStarted","Data":"36e0ae93a97519d35b265f7a804909b3a591cdba3c09b5d87953b5beaaeb8655"} Dec 08 19:03:35 crc kubenswrapper[4998]: I1208 19:03:35.791025 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-ddc5f5c99-k2khd" 
event={"ID":"2003c92d-da08-40c9-9b2d-82bbe1dbb591","Type":"ContainerStarted","Data":"2ed81583456cf149085fb3fa928c0c3ce1e899028fc98aae313cbf8070c3c352"} Dec 08 19:03:35 crc kubenswrapper[4998]: I1208 19:03:35.806325 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-86648f486b-n9qc6" podStartSLOduration=3.452281005 podStartE2EDuration="25.806305954s" podCreationTimestamp="2025-12-08 19:03:10 +0000 UTC" firstStartedPulling="2025-12-08 19:03:12.00163279 +0000 UTC m=+695.649675480" lastFinishedPulling="2025-12-08 19:03:34.355657739 +0000 UTC m=+718.003700429" observedRunningTime="2025-12-08 19:03:35.803441877 +0000 UTC m=+719.451484567" watchObservedRunningTime="2025-12-08 19:03:35.806305954 +0000 UTC m=+719.454348644" Dec 08 19:03:35 crc kubenswrapper[4998]: I1208 19:03:35.823100 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-78c97476f4-mxqp8" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.250399 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.734731 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.734908 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.752473 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.752917 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.753167 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.753330 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-9lr67\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.753399 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.753587 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.753747 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.760045 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.760124 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816084 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" 
(UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816135 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816160 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816178 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816195 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816225 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816248 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816264 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816295 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: 
\"kubernetes.io/configmap/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816315 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816343 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816360 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816385 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816428 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.816444 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917578 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917631 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: 
\"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917655 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917726 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917749 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917773 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917812 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917888 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917930 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917952 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" 
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.917985 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.918040 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.918056 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.918075 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.918093 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.919734 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.921843 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.922331 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0" Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.922540 4998 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.932265 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.933065 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.933100 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.933811 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.935497 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.936752 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.943343 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.959222 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.959491 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.959542 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:36 crc kubenswrapper[4998]: I1208 19:03:36.959561 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f7182adc-f1f1-4403-88d6-0ccbb7211fb5-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f7182adc-f1f1-4403-88d6-0ccbb7211fb5\") " pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:37 crc kubenswrapper[4998]: I1208 19:03:37.051313 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:03:37 crc kubenswrapper[4998]: I1208 19:03:37.242009 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" podUID="e1550f97-e782-4bbe-b3a8-3df18c8f4041" containerName="registry" containerID="cri-o://3d7cced41589c4354afa44deef026a82a700bf099405573648db6887f4f37e31" gracePeriod=30
Dec 08 19:03:37 crc kubenswrapper[4998]: I1208 19:03:37.808931 4998 generic.go:358] "Generic (PLEG): container finished" podID="e1550f97-e782-4bbe-b3a8-3df18c8f4041" containerID="3d7cced41589c4354afa44deef026a82a700bf099405573648db6887f4f37e31" exitCode=0
Dec 08 19:03:37 crc kubenswrapper[4998]: I1208 19:03:37.809183 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" event={"ID":"e1550f97-e782-4bbe-b3a8-3df18c8f4041","Type":"ContainerDied","Data":"3d7cced41589c4354afa44deef026a82a700bf099405573648db6887f4f37e31"}
Dec 08 19:03:37 crc kubenswrapper[4998]: I1208 19:03:37.913899 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 08 19:03:38 crc kubenswrapper[4998]: I1208 19:03:38.818562 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f7182adc-f1f1-4403-88d6-0ccbb7211fb5","Type":"ContainerStarted","Data":"da1d2a03bdc69e64eafac1561fe0a3106ca6ddbc0b693f9234583019a4b92379"}
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.120514 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm"
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123315 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-tls\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123385 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c62q4\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-kube-api-access-c62q4\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123426 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-bound-sa-token\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123484 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1550f97-e782-4bbe-b3a8-3df18c8f4041-ca-trust-extracted\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123536 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1550f97-e782-4bbe-b3a8-3df18c8f4041-installation-pull-secrets\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123562 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-trusted-ca\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123624 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-certificates\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.123939 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\" (UID: \"e1550f97-e782-4bbe-b3a8-3df18c8f4041\") "
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.125724 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.126190 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.136247 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.154076 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-kube-api-access-c62q4" (OuterVolumeSpecName: "kube-api-access-c62q4") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "kube-api-access-c62q4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.155257 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1550f97-e782-4bbe-b3a8-3df18c8f4041-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.156790 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.161105 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e1550f97-e782-4bbe-b3a8-3df18c8f4041-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.187640 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "e1550f97-e782-4bbe-b3a8-3df18c8f4041" (UID: "e1550f97-e782-4bbe-b3a8-3df18c8f4041"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue ""
PluginName "kubernetes.io/csi", VolumeGIDValue "" Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.226413 4998 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/e1550f97-e782-4bbe-b3a8-3df18c8f4041-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.226446 4998 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-trusted-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.226456 4998 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-certificates\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.226464 4998 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-registry-tls\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.226472 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c62q4\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-kube-api-access-c62q4\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.226480 4998 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/e1550f97-e782-4bbe-b3a8-3df18c8f4041-bound-sa-token\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:40 crc kubenswrapper[4998]: I1208 19:03:40.226488 4998 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/e1550f97-e782-4bbe-b3a8-3df18c8f4041-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.027935 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" event={"ID":"8f1133ec-a124-41fc-99f7-4eea97ba4d8f","Type":"ContainerStarted","Data":"420b1a87ff5e21ee5c50eeaa87fa8735e386f161c56b6ad432ec114d76ed7ff5"} Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.031840 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" event={"ID":"e1550f97-e782-4bbe-b3a8-3df18c8f4041","Type":"ContainerDied","Data":"2fc7c2894647f616852f6394c6f42ffe990e043633c58b21502e0d172492101e"} Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.031896 4998 scope.go:117] "RemoveContainer" containerID="3d7cced41589c4354afa44deef026a82a700bf099405573648db6887f4f37e31" Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.032064 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.064913 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-frk74" podStartSLOduration=10.616453223 podStartE2EDuration="16.064891327s" podCreationTimestamp="2025-12-08 19:03:25 +0000 UTC" firstStartedPulling="2025-12-08 19:03:34.743158537 +0000 UTC m=+718.391201227" lastFinishedPulling="2025-12-08 19:03:40.191596641 +0000 UTC m=+723.839639331" observedRunningTime="2025-12-08 19:03:41.062336928 +0000 UTC m=+724.710379648" watchObservedRunningTime="2025-12-08 19:03:41.064891327 +0000 UTC m=+724.712934017" Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.082876 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kj9vm"] Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.090946 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-kj9vm"] Dec 08 19:03:41 crc kubenswrapper[4998]: I1208 19:03:41.375661 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1550f97-e782-4bbe-b3a8-3df18c8f4041" path="/var/lib/kubelet/pods/e1550f97-e782-4bbe-b3a8-3df18c8f4041/volumes" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.809613 4998 patch_prober.go:28] interesting pod/image-registry-66587d64c8-kj9vm container/registry namespace/openshift-image-registry: Readiness probe status=failure output="Get \"https://10.217.0.28:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.810289 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-image-registry/image-registry-66587d64c8-kj9vm" podUID="e1550f97-e782-4bbe-b3a8-3df18c8f4041" containerName="registry" probeResult="failure" output="Get \"https://10.217.0.28:5000/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.958881 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r"] Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.959905 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e1550f97-e782-4bbe-b3a8-3df18c8f4041" containerName="registry" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.961385 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="e1550f97-e782-4bbe-b3a8-3df18c8f4041" containerName="registry" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.961602 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="e1550f97-e782-4bbe-b3a8-3df18c8f4041" containerName="registry" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.969698 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.973497 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r"] Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.976030 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.976564 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-zscf7\"" Dec 08 19:03:44 crc kubenswrapper[4998]: I1208 19:03:44.982373 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Dec 08 19:03:45 crc kubenswrapper[4998]: I1208 19:03:45.014816 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68007594-e5e3-4ba9-81f2-b3731ec33516-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-d2h7r\" (UID: \"68007594-e5e3-4ba9-81f2-b3731ec33516\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:45 crc kubenswrapper[4998]: I1208 19:03:45.014924 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9vgm\" (UniqueName: \"kubernetes.io/projected/68007594-e5e3-4ba9-81f2-b3731ec33516-kube-api-access-v9vgm\") pod \"cert-manager-webhook-7894b5b9b4-d2h7r\" (UID: \"68007594-e5e3-4ba9-81f2-b3731ec33516\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:45 crc kubenswrapper[4998]: I1208 19:03:45.122231 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68007594-e5e3-4ba9-81f2-b3731ec33516-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-d2h7r\" (UID: \"68007594-e5e3-4ba9-81f2-b3731ec33516\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:45 crc kubenswrapper[4998]: I1208 19:03:45.122344 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v9vgm\" (UniqueName: \"kubernetes.io/projected/68007594-e5e3-4ba9-81f2-b3731ec33516-kube-api-access-v9vgm\") pod \"cert-manager-webhook-7894b5b9b4-d2h7r\" (UID: \"68007594-e5e3-4ba9-81f2-b3731ec33516\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:45 crc kubenswrapper[4998]: I1208 19:03:45.145950 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v9vgm\" (UniqueName: \"kubernetes.io/projected/68007594-e5e3-4ba9-81f2-b3731ec33516-kube-api-access-v9vgm\") pod \"cert-manager-webhook-7894b5b9b4-d2h7r\" (UID: \"68007594-e5e3-4ba9-81f2-b3731ec33516\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:45 crc kubenswrapper[4998]: I1208 19:03:45.147316 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/68007594-e5e3-4ba9-81f2-b3731ec33516-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-d2h7r\" (UID: \"68007594-e5e3-4ba9-81f2-b3731ec33516\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:45 crc kubenswrapper[4998]: I1208 19:03:45.292347 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:03:46 crc kubenswrapper[4998]: I1208 19:03:46.138182 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r"] Dec 08 19:03:46 crc kubenswrapper[4998]: W1208 19:03:46.168844 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod68007594_e5e3_4ba9_81f2_b3731ec33516.slice/crio-a91bac7460033de1ce4ff9207a203bc5c158d929eb1fc29df4a4d9a860d19b8e WatchSource:0}: Error finding container a91bac7460033de1ce4ff9207a203bc5c158d929eb1fc29df4a4d9a860d19b8e: Status 404 returned error can't find the container with id a91bac7460033de1ce4ff9207a203bc5c158d929eb1fc29df4a4d9a860d19b8e Dec 08 19:03:46 crc kubenswrapper[4998]: I1208 19:03:46.804625 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-68bdb49cbf-s6rfw" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.095879 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" event={"ID":"68007594-e5e3-4ba9-81f2-b3731ec33516","Type":"ContainerStarted","Data":"a91bac7460033de1ce4ff9207a203bc5c158d929eb1fc29df4a4d9a860d19b8e"} Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.715015 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs"] Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.729025 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.731642 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-kx7hr\"" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.757489 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2vrh\" (UniqueName: \"kubernetes.io/projected/ed407145-db7d-4430-b108-44d8f994a7a3-kube-api-access-m2vrh\") pod \"cert-manager-cainjector-7dbf76d5c8-p2crs\" (UID: \"ed407145-db7d-4430-b108-44d8f994a7a3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.757805 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed407145-db7d-4430-b108-44d8f994a7a3-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-p2crs\" (UID: \"ed407145-db7d-4430-b108-44d8f994a7a3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.761334 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs"] Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.859107 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m2vrh\" (UniqueName: \"kubernetes.io/projected/ed407145-db7d-4430-b108-44d8f994a7a3-kube-api-access-m2vrh\") pod \"cert-manager-cainjector-7dbf76d5c8-p2crs\" (UID: \"ed407145-db7d-4430-b108-44d8f994a7a3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.859156 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed407145-db7d-4430-b108-44d8f994a7a3-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-p2crs\" (UID: \"ed407145-db7d-4430-b108-44d8f994a7a3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.905918 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m2vrh\" (UniqueName: \"kubernetes.io/projected/ed407145-db7d-4430-b108-44d8f994a7a3-kube-api-access-m2vrh\") pod \"cert-manager-cainjector-7dbf76d5c8-p2crs\" (UID: \"ed407145-db7d-4430-b108-44d8f994a7a3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:47 crc kubenswrapper[4998]: I1208 19:03:47.913124 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ed407145-db7d-4430-b108-44d8f994a7a3-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-p2crs\" (UID: \"ed407145-db7d-4430-b108-44d8f994a7a3\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:48 crc kubenswrapper[4998]: I1208 19:03:48.048488 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" Dec 08 19:03:48 crc kubenswrapper[4998]: I1208 19:03:48.768045 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs"] Dec 08 19:03:49 crc kubenswrapper[4998]: I1208 19:03:49.144225 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" event={"ID":"ed407145-db7d-4430-b108-44d8f994a7a3","Type":"ContainerStarted","Data":"03370c58a2f09f138bd2e6771b9972649087412a2e453fb3e9a359c7fc3bc129"} Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.468148 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-858d87f86b-9qqvd"] Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.776405 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-9qqvd"] Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.776651 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.780177 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-68x9p\"" Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.803465 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9841fc50-6ca9-40ba-adec-a2d20e342f73-bound-sa-token\") pod \"cert-manager-858d87f86b-9qqvd\" (UID: \"9841fc50-6ca9-40ba-adec-a2d20e342f73\") " pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.803574 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzpqr\" (UniqueName: \"kubernetes.io/projected/9841fc50-6ca9-40ba-adec-a2d20e342f73-kube-api-access-dzpqr\") pod \"cert-manager-858d87f86b-9qqvd\" (UID: \"9841fc50-6ca9-40ba-adec-a2d20e342f73\") " pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.905343 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dzpqr\" (UniqueName: \"kubernetes.io/projected/9841fc50-6ca9-40ba-adec-a2d20e342f73-kube-api-access-dzpqr\") pod \"cert-manager-858d87f86b-9qqvd\" (UID: \"9841fc50-6ca9-40ba-adec-a2d20e342f73\") " pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.905483 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9841fc50-6ca9-40ba-adec-a2d20e342f73-bound-sa-token\") pod \"cert-manager-858d87f86b-9qqvd\" (UID: \"9841fc50-6ca9-40ba-adec-a2d20e342f73\") " pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.931085 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dzpqr\" (UniqueName: \"kubernetes.io/projected/9841fc50-6ca9-40ba-adec-a2d20e342f73-kube-api-access-dzpqr\") pod \"cert-manager-858d87f86b-9qqvd\" (UID: \"9841fc50-6ca9-40ba-adec-a2d20e342f73\") " pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:03:55 crc kubenswrapper[4998]: I1208 19:03:55.944479 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9841fc50-6ca9-40ba-adec-a2d20e342f73-bound-sa-token\") pod \"cert-manager-858d87f86b-9qqvd\" (UID: \"9841fc50-6ca9-40ba-adec-a2d20e342f73\") " pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:03:56 crc kubenswrapper[4998]: I1208 19:03:56.102483 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-9qqvd" Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.675011 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-lxvps"] Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.683571 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-lxvps" Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.687464 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-c5swx\"" Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.698136 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lxvps"] Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.746515 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zxh4\" (UniqueName: \"kubernetes.io/projected/4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe-kube-api-access-6zxh4\") pod \"infrawatch-operators-lxvps\" (UID: \"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe\") " pod="service-telemetry/infrawatch-operators-lxvps" Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.848389 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6zxh4\" (UniqueName: \"kubernetes.io/projected/4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe-kube-api-access-6zxh4\") pod \"infrawatch-operators-lxvps\" (UID: \"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe\") " pod="service-telemetry/infrawatch-operators-lxvps" Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.877943 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zxh4\" (UniqueName: \"kubernetes.io/projected/4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe-kube-api-access-6zxh4\") pod \"infrawatch-operators-lxvps\" (UID: \"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe\") " pod="service-telemetry/infrawatch-operators-lxvps" Dec 08 19:04:05 crc kubenswrapper[4998]: I1208 19:04:05.952011 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-9qqvd"] Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.007961 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-lxvps" Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.216189 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lxvps"] Dec 08 19:04:06 crc kubenswrapper[4998]: W1208 19:04:06.223290 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d401f7d_8233_49a6_9f32_fdeb2bf0d0fe.slice/crio-c50f161bbfdff8b11a7e5758c4ee69d2a063d63b7cde4f2e2014375c51696610 WatchSource:0}: Error finding container c50f161bbfdff8b11a7e5758c4ee69d2a063d63b7cde4f2e2014375c51696610: Status 404 returned error can't find the container with id c50f161bbfdff8b11a7e5758c4ee69d2a063d63b7cde4f2e2014375c51696610 Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.261281 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" event={"ID":"68007594-e5e3-4ba9-81f2-b3731ec33516","Type":"ContainerStarted","Data":"f0b370e4dfae7a7b90f74c9605bdcae00c4b0fe4974ae4c03141ac6ef104952c"} Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.261474 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.263094 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lxvps" event={"ID":"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe","Type":"ContainerStarted","Data":"c50f161bbfdff8b11a7e5758c4ee69d2a063d63b7cde4f2e2014375c51696610"} Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.264677 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-9qqvd" event={"ID":"9841fc50-6ca9-40ba-adec-a2d20e342f73","Type":"ContainerStarted","Data":"3015f60bdad3c8bf16b4dc46ab136222204e8ce1e4ab41285a912438d08ee9be"} Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.264759 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-9qqvd" event={"ID":"9841fc50-6ca9-40ba-adec-a2d20e342f73","Type":"ContainerStarted","Data":"360903458eb740f1909fae1552c8421b3b2ae2cd9f45be931ac249c0d583925d"} Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.266402 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f7182adc-f1f1-4403-88d6-0ccbb7211fb5","Type":"ContainerStarted","Data":"81edd2eeadf1a1fb4bd06049ceb2ffced8998667bea649e844ad1381181aa977"} Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.268106 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" event={"ID":"ed407145-db7d-4430-b108-44d8f994a7a3","Type":"ContainerStarted","Data":"632f28583a388313e3ee13d37c32eeb0336ea42b0d8baa2e1159a6016b988ae6"} Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.289330 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r" podStartSLOduration=2.730723976 podStartE2EDuration="22.289310176s" podCreationTimestamp="2025-12-08 19:03:44 +0000 UTC" firstStartedPulling="2025-12-08 19:03:46.182877238 +0000 UTC m=+729.830919928" lastFinishedPulling="2025-12-08 19:04:05.741463438 +0000 UTC m=+749.389506128" observedRunningTime="2025-12-08 19:04:06.283600482 +0000 UTC m=+749.931643172" watchObservedRunningTime="2025-12-08 19:04:06.289310176 +0000 UTC m=+749.937352856" 
Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.311327 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-p2crs" podStartSLOduration=2.438648435 podStartE2EDuration="19.311309901s" podCreationTimestamp="2025-12-08 19:03:47 +0000 UTC" firstStartedPulling="2025-12-08 19:03:48.787812002 +0000 UTC m=+732.435854692" lastFinishedPulling="2025-12-08 19:04:05.660473468 +0000 UTC m=+749.308516158" observedRunningTime="2025-12-08 19:04:06.308154466 +0000 UTC m=+749.956197146" watchObservedRunningTime="2025-12-08 19:04:06.311309901 +0000 UTC m=+749.959352591"
Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.406098 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-9qqvd" podStartSLOduration=11.406079733 podStartE2EDuration="11.406079733s" podCreationTimestamp="2025-12-08 19:03:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:04:06.398706493 +0000 UTC m=+750.046749183" watchObservedRunningTime="2025-12-08 19:04:06.406079733 +0000 UTC m=+750.054122423"
Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.807110 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 08 19:04:06 crc kubenswrapper[4998]: I1208 19:04:06.843149 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Dec 08 19:04:08 crc kubenswrapper[4998]: I1208 19:04:08.324161 4998 generic.go:358] "Generic (PLEG): container finished" podID="f7182adc-f1f1-4403-88d6-0ccbb7211fb5" containerID="81edd2eeadf1a1fb4bd06049ceb2ffced8998667bea649e844ad1381181aa977" exitCode=0
Dec 08 19:04:08 crc kubenswrapper[4998]: I1208 19:04:08.324312 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f7182adc-f1f1-4403-88d6-0ccbb7211fb5","Type":"ContainerDied","Data":"81edd2eeadf1a1fb4bd06049ceb2ffced8998667bea649e844ad1381181aa977"}
Dec 08 19:04:09 crc kubenswrapper[4998]: I1208 19:04:09.407200 4998 generic.go:358] "Generic (PLEG): container finished" podID="f7182adc-f1f1-4403-88d6-0ccbb7211fb5" containerID="65b66647eb22903609c6eeecfc43cf34de2c5deb7b11fa8d477c5f20f6092633" exitCode=0
Dec 08 19:04:09 crc kubenswrapper[4998]: I1208 19:04:09.407259 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f7182adc-f1f1-4403-88d6-0ccbb7211fb5","Type":"ContainerDied","Data":"65b66647eb22903609c6eeecfc43cf34de2c5deb7b11fa8d477c5f20f6092633"}
Dec 08 19:04:09 crc kubenswrapper[4998]: I1208 19:04:09.409204 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lxvps" event={"ID":"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe","Type":"ContainerStarted","Data":"4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883"}
Dec 08 19:04:09 crc kubenswrapper[4998]: I1208 19:04:09.465931 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-lxvps" podStartSLOduration=1.946060169 podStartE2EDuration="4.46591115s" podCreationTimestamp="2025-12-08 19:04:05 +0000 UTC" firstStartedPulling="2025-12-08 19:04:06.225322057 +0000 UTC m=+749.873364747" lastFinishedPulling="2025-12-08 19:04:08.745173038 +0000 UTC m=+752.393215728" observedRunningTime="2025-12-08 19:04:09.464319748 +0000 UTC m=+753.112362448" watchObservedRunningTime="2025-12-08 19:04:09.46591115 +0000 UTC m=+753.113953840"
Dec 08 19:04:10 crc kubenswrapper[4998]: I1208 19:04:10.419107 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f7182adc-f1f1-4403-88d6-0ccbb7211fb5","Type":"ContainerStarted","Data":"2779e21c1c66e43cb0138b8a1fc79433842c8d4e6f3b0c368c21ccd191bc9351"}
Dec 08 19:04:10 crc kubenswrapper[4998]: I1208 19:04:10.419977 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:04:11 crc kubenswrapper[4998]: I1208 19:04:11.265774 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=7.475800403 podStartE2EDuration="35.265749861s" podCreationTimestamp="2025-12-08 19:03:36 +0000 UTC" firstStartedPulling="2025-12-08 19:03:38.012584311 +0000 UTC m=+721.660627011" lastFinishedPulling="2025-12-08 19:04:05.802533779 +0000 UTC m=+749.450576469" observedRunningTime="2025-12-08 19:04:10.451212184 +0000 UTC m=+754.099254884" watchObservedRunningTime="2025-12-08 19:04:11.265749861 +0000 UTC m=+754.913792551"
Dec 08 19:04:11 crc kubenswrapper[4998]: I1208 19:04:11.269580 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lxvps"]
Dec 08 19:04:11 crc kubenswrapper[4998]: I1208 19:04:11.425044 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-lxvps" podUID="4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe" containerName="registry-server" containerID="cri-o://4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883" gracePeriod=2
Dec 08 19:04:11 crc kubenswrapper[4998]: I1208 19:04:11.826672 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lxvps"
Dec 08 19:04:11 crc kubenswrapper[4998]: I1208 19:04:11.840044 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6zxh4\" (UniqueName: \"kubernetes.io/projected/4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe-kube-api-access-6zxh4\") pod \"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe\" (UID: \"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe\") "
Dec 08 19:04:11 crc kubenswrapper[4998]: I1208 19:04:11.847398 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe-kube-api-access-6zxh4" (OuterVolumeSpecName: "kube-api-access-6zxh4") pod "4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe" (UID: "4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe"). InnerVolumeSpecName "kube-api-access-6zxh4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:04:11 crc kubenswrapper[4998]: I1208 19:04:11.941056 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6zxh4\" (UniqueName: \"kubernetes.io/projected/4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe-kube-api-access-6zxh4\") on node \"crc\" DevicePath \"\""
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.073475 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-zdm7p"]
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.074314 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe" containerName="registry-server"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.074348 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe" containerName="registry-server"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.074491 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe" containerName="registry-server"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.088630 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.091123 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-zdm7p"]
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.245152 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p29k\" (UniqueName: \"kubernetes.io/projected/cc8370e4-5509-427b-a305-64f5ab17d5bb-kube-api-access-7p29k\") pod \"infrawatch-operators-zdm7p\" (UID: \"cc8370e4-5509-427b-a305-64f5ab17d5bb\") " pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.314496 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-d2h7r"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.346420 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7p29k\" (UniqueName: \"kubernetes.io/projected/cc8370e4-5509-427b-a305-64f5ab17d5bb-kube-api-access-7p29k\") pod \"infrawatch-operators-zdm7p\" (UID: \"cc8370e4-5509-427b-a305-64f5ab17d5bb\") " pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.374654 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p29k\" (UniqueName: \"kubernetes.io/projected/cc8370e4-5509-427b-a305-64f5ab17d5bb-kube-api-access-7p29k\") pod \"infrawatch-operators-zdm7p\" (UID: \"cc8370e4-5509-427b-a305-64f5ab17d5bb\") " pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.402553 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.433875 4998 generic.go:358] "Generic (PLEG): container finished" podID="4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe" containerID="4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883" exitCode=0
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.433968 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lxvps"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.434551 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lxvps" event={"ID":"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe","Type":"ContainerDied","Data":"4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883"}
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.434606 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lxvps" event={"ID":"4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe","Type":"ContainerDied","Data":"c50f161bbfdff8b11a7e5758c4ee69d2a063d63b7cde4f2e2014375c51696610"}
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.434628 4998 scope.go:117] "RemoveContainer" containerID="4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.478345 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lxvps"]
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.482252 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-lxvps"]
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.484137 4998 scope.go:117] "RemoveContainer" containerID="4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883"
Dec 08 19:04:12 crc kubenswrapper[4998]: E1208 19:04:12.484623 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883\": container with ID starting with 4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883 not found: ID does not exist" containerID="4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.484662 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883"} err="failed to get container status \"4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883\": rpc error: code = NotFound desc = could not find container \"4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883\": container with ID starting with 4ca65770798ab9107abba073e77d0a50e3e96cf40cc6c60a3d8160f0612ba883 not found: ID does not exist"
Dec 08 19:04:12 crc kubenswrapper[4998]: I1208 19:04:12.970176 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-zdm7p"]
Dec 08 19:04:13 crc kubenswrapper[4998]: I1208 19:04:13.388919 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe" path="/var/lib/kubelet/pods/4d401f7d-8233-49a6-9f32-fdeb2bf0d0fe/volumes"
Dec 08 19:04:13 crc kubenswrapper[4998]: I1208 19:04:13.460302 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-zdm7p" event={"ID":"cc8370e4-5509-427b-a305-64f5ab17d5bb","Type":"ContainerStarted","Data":"3cb361a38b9ff6b5fcbec3c9eaabcd2d5841f73a1bb83f48896fff8208fa2481"}
Dec 08 19:04:14 crc kubenswrapper[4998]: I1208 19:04:14.477539 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-zdm7p" event={"ID":"cc8370e4-5509-427b-a305-64f5ab17d5bb","Type":"ContainerStarted","Data":"84a5436023e159caf21e6ba23d1a5bbe6bb0f12aab0dac786b7468f81f1d47b2"}
Dec 08 19:04:14 crc kubenswrapper[4998]: I1208 19:04:14.500884 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-zdm7p" podStartSLOduration=2.00557331 podStartE2EDuration="2.500858278s" podCreationTimestamp="2025-12-08 19:04:12 +0000 UTC" firstStartedPulling="2025-12-08 19:04:12.977591213 +0000 UTC m=+756.625633913" lastFinishedPulling="2025-12-08 19:04:13.472876191 +0000 UTC m=+757.120918881" observedRunningTime="2025-12-08 19:04:14.493782196 +0000 UTC m=+758.141824906" watchObservedRunningTime="2025-12-08 19:04:14.500858278 +0000 UTC m=+758.148900968"
Dec 08 19:04:22 crc kubenswrapper[4998]: I1208 19:04:22.402933 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:22 crc kubenswrapper[4998]: I1208 19:04:22.403171 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:22 crc kubenswrapper[4998]: I1208 19:04:22.439329 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:22 crc kubenswrapper[4998]: I1208 19:04:22.706457 4998 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f7182adc-f1f1-4403-88d6-0ccbb7211fb5" containerName="elasticsearch" probeResult="failure" output=<
Dec 08 19:04:22 crc kubenswrapper[4998]: {"timestamp": "2025-12-08T19:04:22+00:00", "message": "readiness probe failed", "curl_rc": "7"}
Dec 08 19:04:22 crc kubenswrapper[4998]: >
Dec 08 19:04:22 crc kubenswrapper[4998]: I1208 19:04:22.716082 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-zdm7p"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.196776 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"]
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.208283 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.212331 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"]
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.345963 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.346383 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6tfz\" (UniqueName: \"kubernetes.io/projected/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-kube-api-access-f6tfz\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.346526 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.447966 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.448022 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f6tfz\" (UniqueName: \"kubernetes.io/projected/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-kube-api-access-f6tfz\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.448061 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.448505 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-bundle\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.449384 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-util\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.474566 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6tfz\" (UniqueName: \"kubernetes.io/projected/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-kube-api-access-f6tfz\") pod \"f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.546995 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.836067 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl"]
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.939220 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"]
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.956575 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:27 crc kubenswrapper[4998]: I1208 19:04:27.958073 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"]
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.001494 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.001769 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.001832 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc7cz\" (UniqueName: \"kubernetes.io/projected/6b9b308d-576a-47c5-9357-d86b535f510d-kube-api-access-mc7cz\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.103223 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.103298 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.103350 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mc7cz\" (UniqueName: \"kubernetes.io/projected/6b9b308d-576a-47c5-9357-d86b535f510d-kube-api-access-mc7cz\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.103878 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-bundle\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.104019 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-util\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.123039 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc7cz\" (UniqueName: \"kubernetes.io/projected/6b9b308d-576a-47c5-9357-d86b535f510d-kube-api-access-mc7cz\") pod \"36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.249222 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.270215 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.548374 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b"]
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.725161 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"]
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.750671 4998 generic.go:358] "Generic (PLEG): container finished" podID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerID="869bbbc929b351305e04aefa09ef17f91ea14fd297510102028ebefa3aa41096" exitCode=0
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.886942 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"]
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.886978 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b" event={"ID":"6b9b308d-576a-47c5-9357-d86b535f510d","Type":"ContainerStarted","Data":"956229920fec96a2000e1fcebf47d6b1b3563d35bff6033703d81e93d7a4bdb5"}
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.887001 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl" event={"ID":"dd4ae176-decc-426f-b98d-e8a6d2b33bb1","Type":"ContainerDied","Data":"869bbbc929b351305e04aefa09ef17f91ea14fd297510102028ebefa3aa41096"}
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.887016 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl" event={"ID":"dd4ae176-decc-426f-b98d-e8a6d2b33bb1","Type":"ContainerStarted","Data":"da9ffa3ffdb31c0655daf7f72069f4e1346110e9d83631c58ea8290112b15d7e"}
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.888146 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.895610 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\""
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.912046 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.912165 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpsx\" (UniqueName: \"kubernetes.io/projected/a4b01ebc-74dc-4d56-ac9e-eef420302e84-kube-api-access-9zpsx\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:28 crc kubenswrapper[4998]: I1208 19:04:28.912263 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.012777 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.012951 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9zpsx\" (UniqueName: \"kubernetes.io/projected/a4b01ebc-74dc-4d56-ac9e-eef420302e84-kube-api-access-9zpsx\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.013047 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.013568 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.013913 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.035649 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zpsx\" (UniqueName: \"kubernetes.io/projected/a4b01ebc-74dc-4d56-ac9e-eef420302e84-kube-api-access-9zpsx\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.215745 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.613606 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp"]
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.758612 4998 generic.go:358] "Generic (PLEG): container finished" podID="6b9b308d-576a-47c5-9357-d86b535f510d" containerID="54826c537cdff140ccaa08d2a18811904fa747b653c0a9b6a2cbaa55971c2341" exitCode=0
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.758751 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b" event={"ID":"6b9b308d-576a-47c5-9357-d86b535f510d","Type":"ContainerDied","Data":"54826c537cdff140ccaa08d2a18811904fa747b653c0a9b6a2cbaa55971c2341"}
Dec 08 19:04:29 crc kubenswrapper[4998]: I1208 19:04:29.760148 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp" event={"ID":"a4b01ebc-74dc-4d56-ac9e-eef420302e84","Type":"ContainerStarted","Data":"5a52a2fa10830ea071e1c03972f063be15cf77e6283fef3a0ec3f44d5a796aa6"}
Dec 08 19:04:30 crc kubenswrapper[4998]: I1208 19:04:30.767256 4998 generic.go:358] "Generic (PLEG): container finished" podID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerID="55dabdf5068be143cf8979b5aab4e5c07914e49fa5b8e41b7490025ce772d6d0" exitCode=0
Dec 08 19:04:30 crc kubenswrapper[4998]: I1208 19:04:30.767466 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp" event={"ID":"a4b01ebc-74dc-4d56-ac9e-eef420302e84","Type":"ContainerDied","Data":"55dabdf5068be143cf8979b5aab4e5c07914e49fa5b8e41b7490025ce772d6d0"}
Dec 08 19:04:31 crc kubenswrapper[4998]: I1208 19:04:31.233121 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:04:31 crc kubenswrapper[4998]: I1208 19:04:31.233197 4998 prober.go:120] "Probe failed" probeType="Liveness"
pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:04:31 crc kubenswrapper[4998]: I1208 19:04:31.777920 4998 generic.go:358] "Generic (PLEG): container finished" podID="6b9b308d-576a-47c5-9357-d86b535f510d" containerID="56d797585c411d7185a12cdef0369f10b61ffaae6193939ffaf805eb89adae83" exitCode=0 Dec 08 19:04:31 crc kubenswrapper[4998]: I1208 19:04:31.778012 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b" event={"ID":"6b9b308d-576a-47c5-9357-d86b535f510d","Type":"ContainerDied","Data":"56d797585c411d7185a12cdef0369f10b61ffaae6193939ffaf805eb89adae83"} Dec 08 19:04:31 crc kubenswrapper[4998]: I1208 19:04:31.781554 4998 generic.go:358] "Generic (PLEG): container finished" podID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerID="c4e4a39ebe040def580c6084bf4f9dbc35bc78f6bb932f704d144cdb121c1ad9" exitCode=0 Dec 08 19:04:31 crc kubenswrapper[4998]: I1208 19:04:31.781719 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl" event={"ID":"dd4ae176-decc-426f-b98d-e8a6d2b33bb1","Type":"ContainerDied","Data":"c4e4a39ebe040def580c6084bf4f9dbc35bc78f6bb932f704d144cdb121c1ad9"} Dec 08 19:04:32 crc kubenswrapper[4998]: I1208 19:04:32.802876 4998 generic.go:358] "Generic (PLEG): container finished" podID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerID="b6bb68f72bc385a210a2932bdce8108fb17404c1c4bd52b664c97f7e5414fc92" exitCode=0 Dec 08 19:04:32 crc kubenswrapper[4998]: I1208 19:04:32.802972 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl" event={"ID":"dd4ae176-decc-426f-b98d-e8a6d2b33bb1","Type":"ContainerDied","Data":"b6bb68f72bc385a210a2932bdce8108fb17404c1c4bd52b664c97f7e5414fc92"} Dec 08 19:04:32 crc kubenswrapper[4998]: I1208 19:04:32.805867 4998 generic.go:358] "Generic (PLEG): container finished" podID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerID="c4279dd688dfb6640d99d51285b8403bfcca91d5ab8504e433fd94ccf6d8e817" exitCode=0 Dec 08 19:04:32 crc kubenswrapper[4998]: I1208 19:04:32.806173 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp" event={"ID":"a4b01ebc-74dc-4d56-ac9e-eef420302e84","Type":"ContainerDied","Data":"c4279dd688dfb6640d99d51285b8403bfcca91d5ab8504e433fd94ccf6d8e817"} Dec 08 19:04:32 crc kubenswrapper[4998]: I1208 19:04:32.810483 4998 generic.go:358] "Generic (PLEG): container finished" podID="6b9b308d-576a-47c5-9357-d86b535f510d" containerID="83451417e5f0bdeef3103d15011e191f7429e060a912fa87b31f56fb1e00538d" exitCode=0 Dec 08 19:04:32 crc kubenswrapper[4998]: I1208 19:04:32.810642 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b" event={"ID":"6b9b308d-576a-47c5-9357-d86b535f510d","Type":"ContainerDied","Data":"83451417e5f0bdeef3103d15011e191f7429e060a912fa87b31f56fb1e00538d"} Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.471112 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-spkr9"] Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 
19:04:33.514022 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spkr9"] Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.514407 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.576834 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-catalog-content\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.576942 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-utilities\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.576980 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwtsn\" (UniqueName: \"kubernetes.io/projected/852cf75e-ca7b-4344-af84-ad1f711e298a-kube-api-access-xwtsn\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.678477 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-utilities\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.678561 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwtsn\" (UniqueName: \"kubernetes.io/projected/852cf75e-ca7b-4344-af84-ad1f711e298a-kube-api-access-xwtsn\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.678650 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-catalog-content\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.679302 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-catalog-content\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.679502 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-utilities\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.703940 4998 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwtsn\" (UniqueName: \"kubernetes.io/projected/852cf75e-ca7b-4344-af84-ad1f711e298a-kube-api-access-xwtsn\") pod \"redhat-operators-spkr9\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.819607 4998 generic.go:358] "Generic (PLEG): container finished" podID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerID="2720cf1a1157cba1934f45c6f84f5488c86536c0cdb6d0dd23336a868d40be6f" exitCode=0 Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.819752 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp" event={"ID":"a4b01ebc-74dc-4d56-ac9e-eef420302e84","Type":"ContainerDied","Data":"2720cf1a1157cba1934f45c6f84f5488c86536c0cdb6d0dd23336a868d40be6f"} Dec 08 19:04:33 crc kubenswrapper[4998]: I1208 19:04:33.838230 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:34 crc kubenswrapper[4998]: I1208 19:04:34.698930 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-spkr9"] Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.090165 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.090726 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b" event={"ID":"6b9b308d-576a-47c5-9357-d86b535f510d","Type":"ContainerDied","Data":"956229920fec96a2000e1fcebf47d6b1b3563d35bff6033703d81e93d7a4bdb5"} Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.090760 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956229920fec96a2000e1fcebf47d6b1b3563d35bff6033703d81e93d7a4bdb5" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.097320 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl" event={"ID":"dd4ae176-decc-426f-b98d-e8a6d2b33bb1","Type":"ContainerDied","Data":"da9ffa3ffdb31c0655daf7f72069f4e1346110e9d83631c58ea8290112b15d7e"} Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.097362 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da9ffa3ffdb31c0655daf7f72069f4e1346110e9d83631c58ea8290112b15d7e" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.097756 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.104761 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spkr9" event={"ID":"852cf75e-ca7b-4344-af84-ad1f711e298a","Type":"ContainerStarted","Data":"a54a4a4a09e2b58bb876d0b3f5cfbfbe25a609f4d243f3529fc934733fd2e83b"} Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.174244 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6tfz\" (UniqueName: \"kubernetes.io/projected/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-kube-api-access-f6tfz\") pod \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.174323 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mc7cz\" (UniqueName: \"kubernetes.io/projected/6b9b308d-576a-47c5-9357-d86b535f510d-kube-api-access-mc7cz\") pod \"6b9b308d-576a-47c5-9357-d86b535f510d\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.174423 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-bundle\") pod \"6b9b308d-576a-47c5-9357-d86b535f510d\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.174521 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-util\") pod \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.174586 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-util\") pod \"6b9b308d-576a-47c5-9357-d86b535f510d\" (UID: \"6b9b308d-576a-47c5-9357-d86b535f510d\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.174621 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-bundle\") pod \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\" (UID: \"dd4ae176-decc-426f-b98d-e8a6d2b33bb1\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.175984 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-bundle" (OuterVolumeSpecName: "bundle") pod "dd4ae176-decc-426f-b98d-e8a6d2b33bb1" (UID: "dd4ae176-decc-426f-b98d-e8a6d2b33bb1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.178179 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-bundle" (OuterVolumeSpecName: "bundle") pod "6b9b308d-576a-47c5-9357-d86b535f510d" (UID: "6b9b308d-576a-47c5-9357-d86b535f510d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.183620 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-kube-api-access-f6tfz" (OuterVolumeSpecName: "kube-api-access-f6tfz") pod "dd4ae176-decc-426f-b98d-e8a6d2b33bb1" (UID: "dd4ae176-decc-426f-b98d-e8a6d2b33bb1"). InnerVolumeSpecName "kube-api-access-f6tfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.191560 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b9b308d-576a-47c5-9357-d86b535f510d-kube-api-access-mc7cz" (OuterVolumeSpecName: "kube-api-access-mc7cz") pod "6b9b308d-576a-47c5-9357-d86b535f510d" (UID: "6b9b308d-576a-47c5-9357-d86b535f510d"). InnerVolumeSpecName "kube-api-access-mc7cz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.194935 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-util" (OuterVolumeSpecName: "util") pod "6b9b308d-576a-47c5-9357-d86b535f510d" (UID: "6b9b308d-576a-47c5-9357-d86b535f510d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.221599 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-util" (OuterVolumeSpecName: "util") pod "dd4ae176-decc-426f-b98d-e8a6d2b33bb1" (UID: "dd4ae176-decc-426f-b98d-e8a6d2b33bb1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.276561 4998 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.276623 4998 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.276635 4998 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6b9b308d-576a-47c5-9357-d86b535f510d-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.276645 4998 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.276660 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-f6tfz\" (UniqueName: \"kubernetes.io/projected/dd4ae176-decc-426f-b98d-e8a6d2b33bb1-kube-api-access-f6tfz\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.276673 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mc7cz\" (UniqueName: \"kubernetes.io/projected/6b9b308d-576a-47c5-9357-d86b535f510d-kube-api-access-mc7cz\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.413404 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.477738 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zpsx\" (UniqueName: \"kubernetes.io/projected/a4b01ebc-74dc-4d56-ac9e-eef420302e84-kube-api-access-9zpsx\") pod \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.477874 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-bundle\") pod \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.477903 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-util\") pod \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\" (UID: \"a4b01ebc-74dc-4d56-ac9e-eef420302e84\") " Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.479157 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-bundle" (OuterVolumeSpecName: "bundle") pod "a4b01ebc-74dc-4d56-ac9e-eef420302e84" (UID: "a4b01ebc-74dc-4d56-ac9e-eef420302e84"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.500236 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4b01ebc-74dc-4d56-ac9e-eef420302e84-kube-api-access-9zpsx" (OuterVolumeSpecName: "kube-api-access-9zpsx") pod "a4b01ebc-74dc-4d56-ac9e-eef420302e84" (UID: "a4b01ebc-74dc-4d56-ac9e-eef420302e84"). InnerVolumeSpecName "kube-api-access-9zpsx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.508430 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-util" (OuterVolumeSpecName: "util") pod "a4b01ebc-74dc-4d56-ac9e-eef420302e84" (UID: "a4b01ebc-74dc-4d56-ac9e-eef420302e84"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.579745 4998 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-bundle\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.580018 4998 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a4b01ebc-74dc-4d56-ac9e-eef420302e84-util\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:35 crc kubenswrapper[4998]: I1208 19:04:35.580086 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9zpsx\" (UniqueName: \"kubernetes.io/projected/a4b01ebc-74dc-4d56-ac9e-eef420302e84-kube-api-access-9zpsx\") on node \"crc\" DevicePath \"\"" Dec 08 19:04:36 crc kubenswrapper[4998]: I1208 19:04:36.113565 4998 generic.go:358] "Generic (PLEG): container finished" podID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerID="0241399d0deff0eb362f8afc96ef2eb2ef210eaff04145abefabd8d2d9e2769a" exitCode=0 Dec 08 19:04:36 crc kubenswrapper[4998]: I1208 19:04:36.114388 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spkr9" event={"ID":"852cf75e-ca7b-4344-af84-ad1f711e298a","Type":"ContainerDied","Data":"0241399d0deff0eb362f8afc96ef2eb2ef210eaff04145abefabd8d2d9e2769a"} Dec 08 19:04:36 crc kubenswrapper[4998]: I1208 19:04:36.116853 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/f308c3282bd783e18badba37dad473f984d0c04be601135745fecb7682c5xnl" Dec 08 19:04:36 crc kubenswrapper[4998]: I1208 19:04:36.117817 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp" Dec 08 19:04:36 crc kubenswrapper[4998]: I1208 19:04:36.118883 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/36ffb4ab4bfe83a910ab52ec1870308fea799225a9f1157962b08e8113mxf4b" Dec 08 19:04:36 crc kubenswrapper[4998]: I1208 19:04:36.121790 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8f5wflp" event={"ID":"a4b01ebc-74dc-4d56-ac9e-eef420302e84","Type":"ContainerDied","Data":"5a52a2fa10830ea071e1c03972f063be15cf77e6283fef3a0ec3f44d5a796aa6"} Dec 08 19:04:36 crc kubenswrapper[4998]: I1208 19:04:36.121839 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a52a2fa10830ea071e1c03972f063be15cf77e6283fef3a0ec3f44d5a796aa6" Dec 08 19:04:37 crc kubenswrapper[4998]: I1208 19:04:37.125814 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spkr9" event={"ID":"852cf75e-ca7b-4344-af84-ad1f711e298a","Type":"ContainerStarted","Data":"47d35a75e47a9e4d514df8c14df0747dbc19f540ed8ebff0e8349eb81ed42066"} Dec 08 19:04:39 crc kubenswrapper[4998]: I1208 19:04:39.167293 4998 generic.go:358] "Generic (PLEG): container finished" podID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerID="47d35a75e47a9e4d514df8c14df0747dbc19f540ed8ebff0e8349eb81ed42066" exitCode=0 Dec 08 19:04:39 crc kubenswrapper[4998]: I1208 19:04:39.167358 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spkr9" event={"ID":"852cf75e-ca7b-4344-af84-ad1f711e298a","Type":"ContainerDied","Data":"47d35a75e47a9e4d514df8c14df0747dbc19f540ed8ebff0e8349eb81ed42066"} Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.012581 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl"] Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013655 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerName="util" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013681 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerName="util" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013714 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013723 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013740 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b9b308d-576a-47c5-9357-d86b535f510d" containerName="pull" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013747 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9b308d-576a-47c5-9357-d86b535f510d" containerName="pull" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013758 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerName="pull" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013765 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerName="pull" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013777 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b9b308d-576a-47c5-9357-d86b535f510d" containerName="extract" Dec 08 19:04:40 
crc kubenswrapper[4998]: I1208 19:04:40.013783 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9b308d-576a-47c5-9357-d86b535f510d" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013797 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerName="pull" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013803 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerName="pull" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013843 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerName="util" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013850 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerName="util" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013860 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013867 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013876 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b9b308d-576a-47c5-9357-d86b535f510d" containerName="util" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.013882 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b9b308d-576a-47c5-9357-d86b535f510d" containerName="util" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.014146 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="a4b01ebc-74dc-4d56-ac9e-eef420302e84" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.014159 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="6b9b308d-576a-47c5-9357-d86b535f510d" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.014175 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="dd4ae176-decc-426f-b98d-e8a6d2b33bb1" containerName="extract" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.167472 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl"] Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.167938 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.170376 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-v49vl\"" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.182918 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spkr9" event={"ID":"852cf75e-ca7b-4344-af84-ad1f711e298a","Type":"ContainerStarted","Data":"b9d96bebce7d4c1e38985ad9868e6d20edc148cce9e4437537e8cca931a94ea0"} Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.206376 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb-runner\") pod \"smart-gateway-operator-5cd794ff55-6gjgl\" (UID: \"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.206434 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gdg\" (UniqueName: \"kubernetes.io/projected/fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb-kube-api-access-f7gdg\") pod \"smart-gateway-operator-5cd794ff55-6gjgl\" (UID: \"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.242498 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-spkr9" podStartSLOduration=6.57035917 podStartE2EDuration="7.242474787s" podCreationTimestamp="2025-12-08 19:04:33 +0000 UTC" firstStartedPulling="2025-12-08 19:04:36.116587952 +0000 UTC m=+779.764630642" lastFinishedPulling="2025-12-08 19:04:36.788703569 +0000 UTC m=+780.436746259" observedRunningTime="2025-12-08 19:04:40.237477381 +0000 UTC m=+783.885520071" watchObservedRunningTime="2025-12-08 19:04:40.242474787 +0000 UTC m=+783.890517477" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.307800 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb-runner\") pod \"smart-gateway-operator-5cd794ff55-6gjgl\" (UID: \"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.308146 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f7gdg\" (UniqueName: \"kubernetes.io/projected/fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb-kube-api-access-f7gdg\") pod \"smart-gateway-operator-5cd794ff55-6gjgl\" (UID: \"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.308347 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb-runner\") pod \"smart-gateway-operator-5cd794ff55-6gjgl\" (UID: \"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.331063 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7gdg\" (UniqueName: 
\"kubernetes.io/projected/fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb-kube-api-access-f7gdg\") pod \"smart-gateway-operator-5cd794ff55-6gjgl\" (UID: \"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb\") " pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.483901 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" Dec 08 19:04:40 crc kubenswrapper[4998]: I1208 19:04:40.729261 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl"] Dec 08 19:04:40 crc kubenswrapper[4998]: W1208 19:04:40.738201 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc05bb69_ff4b_4ec3_a787_ba5a0a5dd8fb.slice/crio-3147f91725ca6dea8dac493c41d36b076ed11ded2205c5d6cd676b37f036effa WatchSource:0}: Error finding container 3147f91725ca6dea8dac493c41d36b076ed11ded2205c5d6cd676b37f036effa: Status 404 returned error can't find the container with id 3147f91725ca6dea8dac493c41d36b076ed11ded2205c5d6cd676b37f036effa Dec 08 19:04:41 crc kubenswrapper[4998]: I1208 19:04:41.189041 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" event={"ID":"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb","Type":"ContainerStarted","Data":"3147f91725ca6dea8dac493c41d36b076ed11ded2205c5d6cd676b37f036effa"} Dec 08 19:04:41 crc kubenswrapper[4998]: I1208 19:04:41.209793 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-f6fff"] Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.388975 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-f6fff"] Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.389196 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.392410 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-9kksx\"" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.434944 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2b283676-cef4-4db2-bbb9-fb3571c336dd-runner\") pod \"service-telemetry-operator-79647f8775-f6fff\" (UID: \"2b283676-cef4-4db2-bbb9-fb3571c336dd\") " pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.435030 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f76jw\" (UniqueName: \"kubernetes.io/projected/2b283676-cef4-4db2-bbb9-fb3571c336dd-kube-api-access-f76jw\") pod \"service-telemetry-operator-79647f8775-f6fff\" (UID: \"2b283676-cef4-4db2-bbb9-fb3571c336dd\") " pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.487055 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-dcz5s"] Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.525032 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-dcz5s"] Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.525333 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.536805 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2b283676-cef4-4db2-bbb9-fb3571c336dd-runner\") pod \"service-telemetry-operator-79647f8775-f6fff\" (UID: \"2b283676-cef4-4db2-bbb9-fb3571c336dd\") " pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.536898 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/2b283676-cef4-4db2-bbb9-fb3571c336dd-runner\") pod \"service-telemetry-operator-79647f8775-f6fff\" (UID: \"2b283676-cef4-4db2-bbb9-fb3571c336dd\") " pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.536912 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-f76jw\" (UniqueName: \"kubernetes.io/projected/2b283676-cef4-4db2-bbb9-fb3571c336dd-kube-api-access-f76jw\") pod \"service-telemetry-operator-79647f8775-f6fff\" (UID: \"2b283676-cef4-4db2-bbb9-fb3571c336dd\") " pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.537077 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82flc\" (UniqueName: \"kubernetes.io/projected/d49b4c4f-da0d-4855-b607-6a4d651cf7d5-kube-api-access-82flc\") pod \"interconnect-operator-78b9bd8798-dcz5s\" (UID: \"d49b4c4f-da0d-4855-b607-6a4d651cf7d5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.542818 4998 reflector.go:430] 
"Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-c297l\"" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.587963 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-f76jw\" (UniqueName: \"kubernetes.io/projected/2b283676-cef4-4db2-bbb9-fb3571c336dd-kube-api-access-f76jw\") pod \"service-telemetry-operator-79647f8775-f6fff\" (UID: \"2b283676-cef4-4db2-bbb9-fb3571c336dd\") " pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.638134 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82flc\" (UniqueName: \"kubernetes.io/projected/d49b4c4f-da0d-4855-b607-6a4d651cf7d5-kube-api-access-82flc\") pod \"interconnect-operator-78b9bd8798-dcz5s\" (UID: \"d49b4c4f-da0d-4855-b607-6a4d651cf7d5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.679184 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82flc\" (UniqueName: \"kubernetes.io/projected/d49b4c4f-da0d-4855-b607-6a4d651cf7d5-kube-api-access-82flc\") pod \"interconnect-operator-78b9bd8798-dcz5s\" (UID: \"d49b4c4f-da0d-4855-b607-6a4d651cf7d5\") " pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.709246 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" Dec 08 19:04:42 crc kubenswrapper[4998]: I1208 19:04:42.871664 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" Dec 08 19:04:43 crc kubenswrapper[4998]: I1208 19:04:43.294633 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-79647f8775-f6fff"] Dec 08 19:04:43 crc kubenswrapper[4998]: I1208 19:04:43.319678 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-dcz5s"] Dec 08 19:04:43 crc kubenswrapper[4998]: I1208 19:04:43.839380 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:43 crc kubenswrapper[4998]: I1208 19:04:43.839420 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:44 crc kubenswrapper[4998]: I1208 19:04:44.278136 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" event={"ID":"d49b4c4f-da0d-4855-b607-6a4d651cf7d5","Type":"ContainerStarted","Data":"2cadb7f5c9a905cab27dc5a2a3b6dc93bca15f6e4dde4920af295aa578c064c8"} Dec 08 19:04:44 crc kubenswrapper[4998]: I1208 19:04:44.281412 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" event={"ID":"2b283676-cef4-4db2-bbb9-fb3571c336dd","Type":"ContainerStarted","Data":"5bed9f1ae0351acc536620998a988ea66967546fda5b91aa6ad9da20e7ed5f99"} Dec 08 19:04:44 crc kubenswrapper[4998]: I1208 19:04:44.919775 4998 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-spkr9" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="registry-server" probeResult="failure" output=< Dec 08 19:04:44 crc 
kubenswrapper[4998]: timeout: failed to connect service ":50051" within 1s Dec 08 19:04:44 crc kubenswrapper[4998]: > Dec 08 19:04:53 crc kubenswrapper[4998]: I1208 19:04:53.897651 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:53 crc kubenswrapper[4998]: I1208 19:04:53.937474 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:04:57 crc kubenswrapper[4998]: I1208 19:04:57.081049 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spkr9"] Dec 08 19:04:57 crc kubenswrapper[4998]: I1208 19:04:57.081751 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-spkr9" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="registry-server" containerID="cri-o://b9d96bebce7d4c1e38985ad9868e6d20edc148cce9e4437537e8cca931a94ea0" gracePeriod=2 Dec 08 19:04:57 crc kubenswrapper[4998]: I1208 19:04:57.454438 4998 generic.go:358] "Generic (PLEG): container finished" podID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerID="b9d96bebce7d4c1e38985ad9868e6d20edc148cce9e4437537e8cca931a94ea0" exitCode=0 Dec 08 19:04:57 crc kubenswrapper[4998]: I1208 19:04:57.454456 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spkr9" event={"ID":"852cf75e-ca7b-4344-af84-ad1f711e298a","Type":"ContainerDied","Data":"b9d96bebce7d4c1e38985ad9868e6d20edc148cce9e4437537e8cca931a94ea0"} Dec 08 19:05:01 crc kubenswrapper[4998]: I1208 19:05:01.233763 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:05:01 crc kubenswrapper[4998]: I1208 19:05:01.234384 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.262052 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.416251 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-catalog-content\") pod \"852cf75e-ca7b-4344-af84-ad1f711e298a\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.416411 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwtsn\" (UniqueName: \"kubernetes.io/projected/852cf75e-ca7b-4344-af84-ad1f711e298a-kube-api-access-xwtsn\") pod \"852cf75e-ca7b-4344-af84-ad1f711e298a\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.416457 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-utilities\") pod \"852cf75e-ca7b-4344-af84-ad1f711e298a\" (UID: \"852cf75e-ca7b-4344-af84-ad1f711e298a\") " Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.417223 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-utilities" (OuterVolumeSpecName: "utilities") pod "852cf75e-ca7b-4344-af84-ad1f711e298a" (UID: "852cf75e-ca7b-4344-af84-ad1f711e298a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.432819 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/852cf75e-ca7b-4344-af84-ad1f711e298a-kube-api-access-xwtsn" (OuterVolumeSpecName: "kube-api-access-xwtsn") pod "852cf75e-ca7b-4344-af84-ad1f711e298a" (UID: "852cf75e-ca7b-4344-af84-ad1f711e298a"). InnerVolumeSpecName "kube-api-access-xwtsn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.515136 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "852cf75e-ca7b-4344-af84-ad1f711e298a" (UID: "852cf75e-ca7b-4344-af84-ad1f711e298a"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.518651 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.518681 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwtsn\" (UniqueName: \"kubernetes.io/projected/852cf75e-ca7b-4344-af84-ad1f711e298a-kube-api-access-xwtsn\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.518803 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/852cf75e-ca7b-4344-af84-ad1f711e298a-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.526168 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-spkr9" event={"ID":"852cf75e-ca7b-4344-af84-ad1f711e298a","Type":"ContainerDied","Data":"a54a4a4a09e2b58bb876d0b3f5cfbfbe25a609f4d243f3529fc934733fd2e83b"} Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.526248 4998 scope.go:117] "RemoveContainer" containerID="b9d96bebce7d4c1e38985ad9868e6d20edc148cce9e4437537e8cca931a94ea0" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.526446 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-spkr9" Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.560286 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-spkr9"] Dec 08 19:05:03 crc kubenswrapper[4998]: I1208 19:05:03.568015 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-spkr9"] Dec 08 19:05:05 crc kubenswrapper[4998]: I1208 19:05:05.373823 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" path="/var/lib/kubelet/pods/852cf75e-ca7b-4344-af84-ad1f711e298a/volumes" Dec 08 19:05:06 crc kubenswrapper[4998]: I1208 19:05:06.043622 4998 scope.go:117] "RemoveContainer" containerID="47d35a75e47a9e4d514df8c14df0747dbc19f540ed8ebff0e8349eb81ed42066" Dec 08 19:05:06 crc kubenswrapper[4998]: I1208 19:05:06.179427 4998 scope.go:117] "RemoveContainer" containerID="0241399d0deff0eb362f8afc96ef2eb2ef210eaff04145abefabd8d2d9e2769a" Dec 08 19:05:16 crc kubenswrapper[4998]: I1208 19:05:16.616549 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" event={"ID":"d49b4c4f-da0d-4855-b607-6a4d651cf7d5","Type":"ContainerStarted","Data":"35d1ff0de474aa97a8e23ba0d9f171d6bb56db6c6405ce0a0eb9d44979ba6c6f"} Dec 08 19:05:16 crc kubenswrapper[4998]: I1208 19:05:16.640213 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-dcz5s" podStartSLOduration=12.008582341 podStartE2EDuration="34.640190246s" podCreationTimestamp="2025-12-08 19:04:42 +0000 UTC" firstStartedPulling="2025-12-08 19:04:43.325760779 +0000 UTC m=+786.973803459" lastFinishedPulling="2025-12-08 19:05:05.957368674 +0000 UTC m=+809.605411364" observedRunningTime="2025-12-08 19:05:16.630474973 +0000 UTC m=+820.278517653" watchObservedRunningTime="2025-12-08 19:05:16.640190246 +0000 UTC m=+820.288232956" Dec 08 19:05:17 crc kubenswrapper[4998]: I1208 19:05:17.624736 4998 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" event={"ID":"fc05bb69-ff4b-4ec3-a787-ba5a0a5dd8fb","Type":"ContainerStarted","Data":"8f8aab294fd9999cbfc0947bdedac3b05f98e0b70b9354cd29d593887dc04bb1"} Dec 08 19:05:17 crc kubenswrapper[4998]: I1208 19:05:17.627023 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" event={"ID":"2b283676-cef4-4db2-bbb9-fb3571c336dd","Type":"ContainerStarted","Data":"a089dd38b6f2ef2ee0c661f1983e289e836d36c2bfded42c5fb63cc896631333"} Dec 08 19:05:17 crc kubenswrapper[4998]: I1208 19:05:17.648032 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-5cd794ff55-6gjgl" podStartSLOduration=2.543336181 podStartE2EDuration="38.648012268s" podCreationTimestamp="2025-12-08 19:04:39 +0000 UTC" firstStartedPulling="2025-12-08 19:04:40.739165503 +0000 UTC m=+784.387208193" lastFinishedPulling="2025-12-08 19:05:16.84384159 +0000 UTC m=+820.491884280" observedRunningTime="2025-12-08 19:05:17.643146936 +0000 UTC m=+821.291189636" watchObservedRunningTime="2025-12-08 19:05:17.648012268 +0000 UTC m=+821.296054968" Dec 08 19:05:17 crc kubenswrapper[4998]: I1208 19:05:17.669070 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-79647f8775-f6fff" podStartSLOduration=2.986972993 podStartE2EDuration="36.669052547s" podCreationTimestamp="2025-12-08 19:04:41 +0000 UTC" firstStartedPulling="2025-12-08 19:04:43.296387385 +0000 UTC m=+786.944430075" lastFinishedPulling="2025-12-08 19:05:16.978466929 +0000 UTC m=+820.626509629" observedRunningTime="2025-12-08 19:05:17.665408648 +0000 UTC m=+821.313451338" watchObservedRunningTime="2025-12-08 19:05:17.669052547 +0000 UTC m=+821.317095237" Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.233037 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.233631 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.233711 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.234411 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"244aa3c38fd1050a3c3363d7b092b6291688366b9c539b044db265cb9764a791"} pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.234478 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" 
containerName="machine-config-daemon" containerID="cri-o://244aa3c38fd1050a3c3363d7b092b6291688366b9c539b044db265cb9764a791" gracePeriod=600 Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.720864 4998 generic.go:358] "Generic (PLEG): container finished" podID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerID="244aa3c38fd1050a3c3363d7b092b6291688366b9c539b044db265cb9764a791" exitCode=0 Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.720920 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerDied","Data":"244aa3c38fd1050a3c3363d7b092b6291688366b9c539b044db265cb9764a791"} Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.721445 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"399965a7144abb509267fb453f1ab207f97f84a712211b414db1beb1f13515d8"} Dec 08 19:05:31 crc kubenswrapper[4998]: I1208 19:05:31.721473 4998 scope.go:117] "RemoveContainer" containerID="c0744be29aaa95b47f57535508586b72c282f591a8c2a2c6ed250f260c5fd85a" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.210264 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-x5ckw"] Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.211654 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="extract-utilities" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.211705 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="extract-utilities" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.211754 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="registry-server" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.211764 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="registry-server" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.211780 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="extract-content" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.211789 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="extract-content" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.211922 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="852cf75e-ca7b-4344-af84-ad1f711e298a" containerName="registry-server" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.219176 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.223721 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-wlhnr\"" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.224000 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.224816 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.227264 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.227315 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.227600 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.235150 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.272621 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.272967 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.273035 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-users\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.273086 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5b778\" (UniqueName: \"kubernetes.io/projected/f6b0ee21-e7e0-4426-981d-33b1302b3b07-kube-api-access-5b778\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.273248 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" 
(UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.273495 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.273578 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-config\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.300995 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-x5ckw"] Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.374925 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.374978 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.375024 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-config\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.375058 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.375116 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " 
pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.375138 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-users\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.375168 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5b778\" (UniqueName: \"kubernetes.io/projected/f6b0ee21-e7e0-4426-981d-33b1302b3b07-kube-api-access-5b778\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.376645 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-config\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.382504 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.383084 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.383801 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.384252 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-users\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.392959 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.398493 
4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5b778\" (UniqueName: \"kubernetes.io/projected/f6b0ee21-e7e0-4426-981d-33b1302b3b07-kube-api-access-5b778\") pod \"default-interconnect-55bf8d5cb-x5ckw\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.565478 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:05:38 crc kubenswrapper[4998]: I1208 19:05:38.984665 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-x5ckw"] Dec 08 19:05:38 crc kubenswrapper[4998]: W1208 19:05:38.994732 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf6b0ee21_e7e0_4426_981d_33b1302b3b07.slice/crio-194598b14630446df0b99aec6ee92a1bb13a81ef9763d86f668809570f884b3b WatchSource:0}: Error finding container 194598b14630446df0b99aec6ee92a1bb13a81ef9763d86f668809570f884b3b: Status 404 returned error can't find the container with id 194598b14630446df0b99aec6ee92a1bb13a81ef9763d86f668809570f884b3b Dec 08 19:05:39 crc kubenswrapper[4998]: I1208 19:05:39.895887 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" event={"ID":"f6b0ee21-e7e0-4426-981d-33b1302b3b07","Type":"ContainerStarted","Data":"194598b14630446df0b99aec6ee92a1bb13a81ef9763d86f668809570f884b3b"} Dec 08 19:05:46 crc kubenswrapper[4998]: I1208 19:05:46.980804 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" event={"ID":"f6b0ee21-e7e0-4426-981d-33b1302b3b07","Type":"ContainerStarted","Data":"e4a7f0109680e0638f2ffaf599d6104619447b06c1f156f15421e2f73794aaf0"} Dec 08 19:05:47 crc kubenswrapper[4998]: I1208 19:05:47.016041 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" podStartSLOduration=1.596174531 podStartE2EDuration="9.016020429s" podCreationTimestamp="2025-12-08 19:05:38 +0000 UTC" firstStartedPulling="2025-12-08 19:05:38.99729644 +0000 UTC m=+842.645339140" lastFinishedPulling="2025-12-08 19:05:46.417142348 +0000 UTC m=+850.065185038" observedRunningTime="2025-12-08 19:05:47.006074849 +0000 UTC m=+850.654117549" watchObservedRunningTime="2025-12-08 19:05:47.016020429 +0000 UTC m=+850.664063129" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.223803 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.348039 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.348117 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.351073 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.351073 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.351216 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.351463 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.351803 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.352245 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.352448 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-2hbs5\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.352643 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.453916 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-web-config\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.453976 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0081442-ff22-420f-99cd-9dd725e37fe8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454015 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454068 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-config\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454099 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c0081442-ff22-420f-99cd-9dd725e37fe8-config-out\") 
pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454230 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454263 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c0081442-ff22-420f-99cd-9dd725e37fe8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454282 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mpswv\" (UniqueName: \"kubernetes.io/projected/c0081442-ff22-420f-99cd-9dd725e37fe8-kube-api-access-mpswv\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454358 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.454399 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c0081442-ff22-420f-99cd-9dd725e37fe8-tls-assets\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.555868 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.555928 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c0081442-ff22-420f-99cd-9dd725e37fe8-tls-assets\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.555984 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-web-config\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.556004 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0081442-ff22-420f-99cd-9dd725e37fe8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.556021 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.556045 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-config\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.556064 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c0081442-ff22-420f-99cd-9dd725e37fe8-config-out\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.556117 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.556135 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c0081442-ff22-420f-99cd-9dd725e37fe8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.556153 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mpswv\" (UniqueName: \"kubernetes.io/projected/c0081442-ff22-420f-99cd-9dd725e37fe8-kube-api-access-mpswv\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: E1208 19:05:51.557009 4998 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 08 19:05:51 crc kubenswrapper[4998]: E1208 19:05:51.557117 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls podName:c0081442-ff22-420f-99cd-9dd725e37fe8 nodeName:}" failed. No retries permitted until 2025-12-08 19:05:52.057086454 +0000 UTC m=+855.705129144 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "c0081442-ff22-420f-99cd-9dd725e37fe8") : secret "default-prometheus-proxy-tls" not found Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.561844 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/c0081442-ff22-420f-99cd-9dd725e37fe8-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.564813 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-web-config\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.564927 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.566491 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c0081442-ff22-420f-99cd-9dd725e37fe8-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.569024 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/c0081442-ff22-420f-99cd-9dd725e37fe8-config-out\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.569425 4998 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.569458 4998 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/e4f483e678fe59d1043245eed60345be206bdbe1196eaed20d342834a4545dbd/globalmount\"" pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.572991 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/c0081442-ff22-420f-99cd-9dd725e37fe8-tls-assets\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.573025 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-config\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.580134 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mpswv\" (UniqueName: \"kubernetes.io/projected/c0081442-ff22-420f-99cd-9dd725e37fe8-kube-api-access-mpswv\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:51 crc kubenswrapper[4998]: I1208 19:05:51.598427 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-714662b7-e4a4-473a-bfe0-d40a8260dadb\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:52 crc kubenswrapper[4998]: I1208 19:05:52.062770 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:52 crc kubenswrapper[4998]: E1208 19:05:52.062987 4998 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Dec 08 19:05:52 crc kubenswrapper[4998]: E1208 19:05:52.063148 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls podName:c0081442-ff22-420f-99cd-9dd725e37fe8 nodeName:}" failed. No retries permitted until 2025-12-08 19:05:53.063130775 +0000 UTC m=+856.711173465 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "c0081442-ff22-420f-99cd-9dd725e37fe8") : secret "default-prometheus-proxy-tls" not found Dec 08 19:05:53 crc kubenswrapper[4998]: I1208 19:05:53.076908 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:53 crc kubenswrapper[4998]: I1208 19:05:53.091339 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/c0081442-ff22-420f-99cd-9dd725e37fe8-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"c0081442-ff22-420f-99cd-9dd725e37fe8\") " pod="service-telemetry/prometheus-default-0" Dec 08 19:05:53 crc kubenswrapper[4998]: I1208 19:05:53.166314 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Dec 08 19:05:53 crc kubenswrapper[4998]: I1208 19:05:53.499620 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Dec 08 19:05:54 crc kubenswrapper[4998]: I1208 19:05:54.030290 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"c0081442-ff22-420f-99cd-9dd725e37fe8","Type":"ContainerStarted","Data":"1327b55582fab6093a9495f2f7a2bc0fcdd5b104f780cfa2aace437333010b5d"} Dec 08 19:05:58 crc kubenswrapper[4998]: I1208 19:05:58.062109 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"c0081442-ff22-420f-99cd-9dd725e37fe8","Type":"ContainerStarted","Data":"734b521179cbb0f4cde7f1abb7b71296a16978456ba6360db8f2ead4084ef06d"} Dec 08 19:06:02 crc kubenswrapper[4998]: I1208 19:06:02.392902 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g"] Dec 08 19:06:02 crc kubenswrapper[4998]: I1208 19:06:02.406487 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" Dec 08 19:06:02 crc kubenswrapper[4998]: I1208 19:06:02.407910 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g"] Dec 08 19:06:02 crc kubenswrapper[4998]: I1208 19:06:02.514159 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq4fp\" (UniqueName: \"kubernetes.io/projected/dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1-kube-api-access-tq4fp\") pod \"default-snmp-webhook-6774d8dfbc-9dj6g\" (UID: \"dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" Dec 08 19:06:02 crc kubenswrapper[4998]: I1208 19:06:02.615437 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tq4fp\" (UniqueName: \"kubernetes.io/projected/dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1-kube-api-access-tq4fp\") pod \"default-snmp-webhook-6774d8dfbc-9dj6g\" (UID: \"dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" Dec 08 19:06:02 crc kubenswrapper[4998]: I1208 19:06:02.635585 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tq4fp\" (UniqueName: \"kubernetes.io/projected/dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1-kube-api-access-tq4fp\") pod \"default-snmp-webhook-6774d8dfbc-9dj6g\" (UID: \"dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1\") " pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" Dec 08 19:06:02 crc kubenswrapper[4998]: I1208 19:06:02.727554 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" Dec 08 19:06:03 crc kubenswrapper[4998]: I1208 19:06:03.015938 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g"] Dec 08 19:06:03 crc kubenswrapper[4998]: I1208 19:06:03.228110 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" event={"ID":"dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1","Type":"ContainerStarted","Data":"0728100465fb5ceac8838db4deba6ce028d914972dcb65851f23a871d4a0d4e8"} Dec 08 19:06:04 crc kubenswrapper[4998]: I1208 19:06:04.243267 4998 generic.go:358] "Generic (PLEG): container finished" podID="c0081442-ff22-420f-99cd-9dd725e37fe8" containerID="734b521179cbb0f4cde7f1abb7b71296a16978456ba6360db8f2ead4084ef06d" exitCode=0 Dec 08 19:06:04 crc kubenswrapper[4998]: I1208 19:06:04.243578 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"c0081442-ff22-420f-99cd-9dd725e37fe8","Type":"ContainerDied","Data":"734b521179cbb0f4cde7f1abb7b71296a16978456ba6360db8f2ead4084ef06d"} Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.683468 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.690334 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.696065 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.696395 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.696546 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-zgh2g\"" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.696664 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.696802 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.696914 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.705624 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799145 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-web-config\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799190 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n72kq\" (UniqueName: \"kubernetes.io/projected/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-kube-api-access-n72kq\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799225 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799285 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799328 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 
19:06:05.799352 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-config-volume\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799419 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799445 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.799459 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-config-out\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901089 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-web-config\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901137 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n72kq\" (UniqueName: \"kubernetes.io/projected/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-kube-api-access-n72kq\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901174 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901194 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901213 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " 
pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901240 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-config-volume\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901284 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901311 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.901388 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-config-out\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: E1208 19:06:05.902544 4998 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 08 19:06:05 crc kubenswrapper[4998]: E1208 19:06:05.902672 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls podName:f1c7871f-1319-4b9e-9f33-aaacc8ed7a13 nodeName:}" failed. No retries permitted until 2025-12-08 19:06:06.402656221 +0000 UTC m=+870.050698911 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "f1c7871f-1319-4b9e-9f33-aaacc8ed7a13") : secret "default-alertmanager-proxy-tls" not found Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.922510 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-config-volume\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.925534 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-web-config\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.926108 4998 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.926129 4998 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/89b4cb7b91b53e34f57cff1cbef19b36ed2e57c7c6e48d9f2c028720cee938d6/globalmount\"" pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.927836 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.929982 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.933392 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n72kq\" (UniqueName: \"kubernetes.io/projected/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-kube-api-access-n72kq\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.943975 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-config-out\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.945129 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:05 crc kubenswrapper[4998]: I1208 19:06:05.984639 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-799eefe5-7fa3-49c3-a4f0-23325657df74\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:06 crc kubenswrapper[4998]: I1208 19:06:06.409086 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:06 crc kubenswrapper[4998]: E1208 19:06:06.409308 4998 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" 
not found Dec 08 19:06:06 crc kubenswrapper[4998]: E1208 19:06:06.409407 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls podName:f1c7871f-1319-4b9e-9f33-aaacc8ed7a13 nodeName:}" failed. No retries permitted until 2025-12-08 19:06:07.409387239 +0000 UTC m=+871.057429929 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "f1c7871f-1319-4b9e-9f33-aaacc8ed7a13") : secret "default-alertmanager-proxy-tls" not found Dec 08 19:06:07 crc kubenswrapper[4998]: I1208 19:06:07.483301 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:07 crc kubenswrapper[4998]: E1208 19:06:07.485116 4998 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 08 19:06:07 crc kubenswrapper[4998]: E1208 19:06:07.485182 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls podName:f1c7871f-1319-4b9e-9f33-aaacc8ed7a13 nodeName:}" failed. No retries permitted until 2025-12-08 19:06:09.485162463 +0000 UTC m=+873.133205153 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "f1c7871f-1319-4b9e-9f33-aaacc8ed7a13") : secret "default-alertmanager-proxy-tls" not found Dec 08 19:06:09 crc kubenswrapper[4998]: I1208 19:06:09.730604 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:09 crc kubenswrapper[4998]: E1208 19:06:09.730868 4998 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Dec 08 19:06:09 crc kubenswrapper[4998]: E1208 19:06:09.731751 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls podName:f1c7871f-1319-4b9e-9f33-aaacc8ed7a13 nodeName:}" failed. No retries permitted until 2025-12-08 19:06:13.731730151 +0000 UTC m=+877.379772841 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "f1c7871f-1319-4b9e-9f33-aaacc8ed7a13") : secret "default-alertmanager-proxy-tls" not found Dec 08 19:06:13 crc kubenswrapper[4998]: I1208 19:06:13.766062 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:13 crc kubenswrapper[4998]: I1208 19:06:13.777516 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f1c7871f-1319-4b9e-9f33-aaacc8ed7a13-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13\") " pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:13 crc kubenswrapper[4998]: I1208 19:06:13.824589 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Dec 08 19:06:23 crc kubenswrapper[4998]: I1208 19:06:23.309706 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Dec 08 19:06:23 crc kubenswrapper[4998]: W1208 19:06:23.316837 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf1c7871f_1319_4b9e_9f33_aaacc8ed7a13.slice/crio-53a568ca3b337bb5e46ccb8757537a29bd13300b098ca4b711d6f92af2f5b71a WatchSource:0}: Error finding container 53a568ca3b337bb5e46ccb8757537a29bd13300b098ca4b711d6f92af2f5b71a: Status 404 returned error can't find the container with id 53a568ca3b337bb5e46ccb8757537a29bd13300b098ca4b711d6f92af2f5b71a Dec 08 19:06:23 crc kubenswrapper[4998]: I1208 19:06:23.902042 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"c0081442-ff22-420f-99cd-9dd725e37fe8","Type":"ContainerStarted","Data":"7a26894666c4868f355400b42e675ec9cd732db7cb5e00eb30513349e5b247b8"} Dec 08 19:06:23 crc kubenswrapper[4998]: I1208 19:06:23.903394 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13","Type":"ContainerStarted","Data":"53a568ca3b337bb5e46ccb8757537a29bd13300b098ca4b711d6f92af2f5b71a"} Dec 08 19:06:23 crc kubenswrapper[4998]: I1208 19:06:23.905301 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" event={"ID":"dd78e47c-5a1f-4fc3-bb0d-3fa0c45587d1","Type":"ContainerStarted","Data":"6564c54102bd1d5b974561dd7cb64183a54eec44bbcfc814b09544234b8e3a9a"} Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.473373 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6774d8dfbc-9dj6g" podStartSLOduration=2.646683915 podStartE2EDuration="22.473350521s" podCreationTimestamp="2025-12-08 19:06:02 +0000 UTC" firstStartedPulling="2025-12-08 19:06:03.022859216 +0000 UTC m=+866.670901906" lastFinishedPulling="2025-12-08 19:06:22.849525822 +0000 UTC m=+886.497568512" observedRunningTime="2025-12-08 19:06:23.931104972 
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.476126 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"]
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.500993 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"]
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.501235 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.507067 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\""
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.507473 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\""
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.507744 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\""
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.507971 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-qgc2x\""
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.624466 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.624529 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rk6p\" (UniqueName: \"kubernetes.io/projected/6330b91f-03e0-49bb-a002-50fc85497f4c-kube-api-access-8rk6p\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.624612 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6330b91f-03e0-49bb-a002-50fc85497f4c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.624797 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6330b91f-03e0-49bb-a002-50fc85497f4c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.625013 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.726425 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6330b91f-03e0-49bb-a002-50fc85497f4c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.726489 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6330b91f-03e0-49bb-a002-50fc85497f4c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.726636 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.726755 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.726793 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8rk6p\" (UniqueName: \"kubernetes.io/projected/6330b91f-03e0-49bb-a002-50fc85497f4c-kube-api-access-8rk6p\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: E1208 19:06:24.726970 4998 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.727004 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6330b91f-03e0-49bb-a002-50fc85497f4c-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: E1208 19:06:24.727092 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls podName:6330b91f-03e0-49bb-a002-50fc85497f4c nodeName:}" failed. No retries permitted until 2025-12-08 19:06:25.2270476 +0000 UTC m=+888.875090290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-64t4w" (UID: "6330b91f-03e0-49bb-a002-50fc85497f4c") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.727417 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6330b91f-03e0-49bb-a002-50fc85497f4c-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.737714 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:24 crc kubenswrapper[4998]: I1208 19:06:24.773439 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rk6p\" (UniqueName: \"kubernetes.io/projected/6330b91f-03e0-49bb-a002-50fc85497f4c-kube-api-access-8rk6p\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:25 crc kubenswrapper[4998]: I1208 19:06:25.233934 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:25 crc kubenswrapper[4998]: E1208 19:06:25.234192 4998 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 08 19:06:25 crc kubenswrapper[4998]: E1208 19:06:25.234254 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls podName:6330b91f-03e0-49bb-a002-50fc85497f4c nodeName:}" failed. No retries permitted until 2025-12-08 19:06:26.234238701 +0000 UTC m=+889.882281391 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-787645d794-64t4w" (UID: "6330b91f-03e0-49bb-a002-50fc85497f4c") : secret "default-cloud1-coll-meter-proxy-tls" not found
Dec 08 19:06:25 crc kubenswrapper[4998]: I1208 19:06:25.919166 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"c0081442-ff22-420f-99cd-9dd725e37fe8","Type":"ContainerStarted","Data":"b4e185776ff955de419a512fdc237d001b338b36d94c83825072aba7c48cfe4b"}
Dec 08 19:06:25 crc kubenswrapper[4998]: I1208 19:06:25.922597 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13","Type":"ContainerStarted","Data":"465b748ec535e93e2525edd5dcc7509fabbb1ba1fc78d7e392e3c6eef9fb31dd"}
Dec 08 19:06:26 crc kubenswrapper[4998]: I1208 19:06:26.251943 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:26 crc kubenswrapper[4998]: I1208 19:06:26.268332 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6330b91f-03e0-49bb-a002-50fc85497f4c-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-787645d794-64t4w\" (UID: \"6330b91f-03e0-49bb-a002-50fc85497f4c\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:26 crc kubenswrapper[4998]: I1208 19:06:26.320393 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"
Dec 08 19:06:26 crc kubenswrapper[4998]: I1208 19:06:26.797382 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w"]
Dec 08 19:06:26 crc kubenswrapper[4998]: I1208 19:06:26.947031 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerStarted","Data":"fe04b56587ee336373be3ba8cf718ef2db7edde86ecbe92996a7cedf2658d1ce"}
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.032320 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"]
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.042140 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.049267 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"]
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.051287 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\""
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.051671 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\""
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.081400 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.081483 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.081565 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52zp7\" (UniqueName: \"kubernetes.io/projected/86068142-4ee8-4b18-bc23-3200d3517caf-kube-api-access-52zp7\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.081596 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/86068142-4ee8-4b18-bc23-3200d3517caf-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.081611 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/86068142-4ee8-4b18-bc23-3200d3517caf-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.183437 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52zp7\" (UniqueName: \"kubernetes.io/projected/86068142-4ee8-4b18-bc23-3200d3517caf-kube-api-access-52zp7\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.183513 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/86068142-4ee8-4b18-bc23-3200d3517caf-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.183534 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/86068142-4ee8-4b18-bc23-3200d3517caf-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.183592 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.183625 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.185322 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/86068142-4ee8-4b18-bc23-3200d3517caf-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.186044 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/86068142-4ee8-4b18-bc23-3200d3517caf-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: E1208 19:06:28.186108 4998 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 08 19:06:28 crc kubenswrapper[4998]: E1208 19:06:28.186168 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls podName:86068142-4ee8-4b18-bc23-3200d3517caf nodeName:}" failed. No retries permitted until 2025-12-08 19:06:28.686152696 +0000 UTC m=+892.334195386 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" (UID: "86068142-4ee8-4b18-bc23-3200d3517caf") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.192247 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.207177 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52zp7\" (UniqueName: \"kubernetes.io/projected/86068142-4ee8-4b18-bc23-3200d3517caf-kube-api-access-52zp7\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: I1208 19:06:28.691298 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:28 crc kubenswrapper[4998]: E1208 19:06:28.691522 4998 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 08 19:06:28 crc kubenswrapper[4998]: E1208 19:06:28.691595 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls podName:86068142-4ee8-4b18-bc23-3200d3517caf nodeName:}" failed. No retries permitted until 2025-12-08 19:06:29.69157467 +0000 UTC m=+893.339617360 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" (UID: "86068142-4ee8-4b18-bc23-3200d3517caf") : secret "default-cloud1-ceil-meter-proxy-tls" not found
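Each not-yet-created secret produces the same three-entry cycle (MountVolume started, Couldn't get secret, nestedpendingoperations backoff) until the operator writes it. When triaging a log like this one, a hypothetical helper such as the following can tally how many retries each secret cost; the regexp targets the "Couldn't get secret" line shape seen above, and the program is an illustration, not part of any kubelet tooling:

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
    )

    // Matches lines like:
    //   ... secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls:
    //       secret "default-cloud1-ceil-meter-proxy-tls" not found
    var missingSecret = regexp.MustCompile(
        `Couldn't get secret ([^:]+/[^:]+): secret "([^"]+)" not found`)

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // log lines can be long
        for sc.Scan() {
            if m := missingSecret.FindStringSubmatch(sc.Text()); m != nil {
                counts[m[1]]++ // namespace/name of the missing secret
            }
        }
        for ref, n := range counts {
            fmt.Printf("%-60s %d retries\n", ref, n)
        }
    }

Fed this section on stdin ("go run scan.go < kubelet.log"), it would show a handful of retries per proxy-tls secret, consistent with the operator racing the pods it creates.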
Dec 08 19:06:29 crc kubenswrapper[4998]: I1208 19:06:29.710156 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:29 crc kubenswrapper[4998]: E1208 19:06:29.710390 4998 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 08 19:06:29 crc kubenswrapper[4998]: E1208 19:06:29.711249 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls podName:86068142-4ee8-4b18-bc23-3200d3517caf nodeName:}" failed. No retries permitted until 2025-12-08 19:06:31.711228075 +0000 UTC m=+895.359270765 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" (UID: "86068142-4ee8-4b18-bc23-3200d3517caf") : secret "default-cloud1-ceil-meter-proxy-tls" not found
Dec 08 19:06:31 crc kubenswrapper[4998]: I1208 19:06:31.744114 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:31 crc kubenswrapper[4998]: I1208 19:06:31.752946 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/86068142-4ee8-4b18-bc23-3200d3517caf-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx\" (UID: \"86068142-4ee8-4b18-bc23-3200d3517caf\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:31.999905 4998 generic.go:358] "Generic (PLEG): container finished" podID="f1c7871f-1319-4b9e-9f33-aaacc8ed7a13" containerID="465b748ec535e93e2525edd5dcc7509fabbb1ba1fc78d7e392e3c6eef9fb31dd" exitCode=0
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:31.999966 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13","Type":"ContainerDied","Data":"465b748ec535e93e2525edd5dcc7509fabbb1ba1fc78d7e392e3c6eef9fb31dd"}
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.000241 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.758402 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"]
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.839358 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"]
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.839629 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.843623 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\""
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.844754 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\""
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.972710 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.972783 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.972889 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2sh8f\" (UniqueName: \"kubernetes.io/projected/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-kube-api-access-2sh8f\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.973247 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:32 crc kubenswrapper[4998]: I1208 19:06:32.973297 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.074294 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2sh8f\" (UniqueName: \"kubernetes.io/projected/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-kube-api-access-2sh8f\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.074356 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.074384 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.074415 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.074439 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.075154 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: E1208 19:06:33.075254 4998 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Dec 08 19:06:33 crc kubenswrapper[4998]: E1208 19:06:33.075341 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls podName:928ecc36-3576-4c8c-895c-4ba1ce39c5dc nodeName:}" failed. No retries permitted until 2025-12-08 19:06:33.575318682 +0000 UTC m=+897.223361362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" (UID: "928ecc36-3576-4c8c-895c-4ba1ce39c5dc") : secret "default-cloud1-sens-meter-proxy-tls" not found
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.075504 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.098046 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2sh8f\" (UniqueName: \"kubernetes.io/projected/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-kube-api-access-2sh8f\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.114747 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: I1208 19:06:33.585153 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:33 crc kubenswrapper[4998]: E1208 19:06:33.585391 4998 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Dec 08 19:06:33 crc kubenswrapper[4998]: E1208 19:06:33.585521 4998 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls podName:928ecc36-3576-4c8c-895c-4ba1ce39c5dc nodeName:}" failed. No retries permitted until 2025-12-08 19:06:34.585490494 +0000 UTC m=+898.233533184 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" (UID: "928ecc36-3576-4c8c-895c-4ba1ce39c5dc") : secret "default-cloud1-sens-meter-proxy-tls" not found
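The m=+898.233533184 suffix in these deadlines is Go's monotonic clock reading: a time.Time printed with its default Stringer shows the wall-clock reading plus the seconds elapsed on the process-local monotonic clock, so here it counts seconds since the kubelet process started. Arithmetic between two such values uses the monotonic reading, which keeps backoff intervals correct even if the wall clock is stepped (NTP and the like). A short sketch of the standard-library behavior:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        t := time.Now()
        // Default formatting includes the monotonic reading, e.g.
        // "2025-12-08 19:06:34.585490494 +0000 UTC m=+898.233533184",
        // the same shape as the retry deadlines logged above.
        fmt.Println(t)

        // Sub prefers the monotonic readings when both operands carry one,
        // so the computed interval is immune to wall-clock adjustments.
        deadline := t.Add(500 * time.Millisecond)
        fmt.Println(deadline.Sub(t)) // 500ms
    }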
Dec 08 19:06:34 crc kubenswrapper[4998]: I1208 19:06:34.603877 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:34 crc kubenswrapper[4998]: I1208 19:06:34.617077 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/928ecc36-3576-4c8c-895c-4ba1ce39c5dc-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr\" (UID: \"928ecc36-3576-4c8c-895c-4ba1ce39c5dc\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:34 crc kubenswrapper[4998]: I1208 19:06:34.670483 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"
Dec 08 19:06:37 crc kubenswrapper[4998]: I1208 19:06:37.685268 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx"]
Dec 08 19:06:37 crc kubenswrapper[4998]: I1208 19:06:37.768849 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr"]
Dec 08 19:06:37 crc kubenswrapper[4998]: W1208 19:06:37.789338 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod928ecc36_3576_4c8c_895c_4ba1ce39c5dc.slice/crio-432631009f378dbb560e3cc317a98d5ae41563274a2b40090498207a9f9f253f WatchSource:0}: Error finding container 432631009f378dbb560e3cc317a98d5ae41563274a2b40090498207a9f9f253f: Status 404 returned error can't find the container with id 432631009f378dbb560e3cc317a98d5ae41563274a2b40090498207a9f9f253f
Dec 08 19:06:37 crc kubenswrapper[4998]: I1208 19:06:37.857907 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-72nfz_88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa/kube-multus/0.log"
Dec 08 19:06:37 crc kubenswrapper[4998]: I1208 19:06:37.896336 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-72nfz_88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa/kube-multus/0.log"
Dec 08 19:06:37 crc kubenswrapper[4998]: I1208 19:06:37.902914 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 19:06:37 crc kubenswrapper[4998]: I1208 19:06:37.925056 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log"
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.088575 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"c0081442-ff22-420f-99cd-9dd725e37fe8","Type":"ContainerStarted","Data":"a0c6b87e68e0c8e30971d793a53bd0e9045f0bb5ff8ea65fcf424e39ff97b549"}
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.096780 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerStarted","Data":"9fa02f7172e6d00c324e1f9385dce4fe07e4fde15efc183f74308da9fffffd58"}
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.099523 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerStarted","Data":"432631009f378dbb560e3cc317a98d5ae41563274a2b40090498207a9f9f253f"}
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.101836 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerStarted","Data":"2028f936bcb8648835b70b8044cc6659d39363b6f2119237abd68f0ac2935cb0"}
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.123790 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.379771574 podStartE2EDuration="48.123767414s" podCreationTimestamp="2025-12-08 19:05:50 +0000 UTC" firstStartedPulling="2025-12-08 19:05:53.516171337 +0000 UTC m=+857.164214017" lastFinishedPulling="2025-12-08 19:06:37.260167167 +0000 UTC m=+900.908209857" observedRunningTime="2025-12-08 19:06:38.115065799 +0000 UTC m=+901.763108489" watchObservedRunningTime="2025-12-08 19:06:38.123767414 +0000 UTC m=+901.771810104"
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.166871 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0"
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.166931 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Dec 08 19:06:38 crc kubenswrapper[4998]: I1208 19:06:38.210456 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Dec 08 19:06:39 crc kubenswrapper[4998]: I1208 19:06:39.115360 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerStarted","Data":"5450fceed76bb1f83d29e4b7baba08df51753207bb678a1ef5a7a0367d63c09e"}
Dec 08 19:06:39 crc kubenswrapper[4998]: I1208 19:06:39.118404 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerStarted","Data":"32d6af1744722a0cd598be7170f8f0c2da2482608f726f5cec6512e5004560ef"}
Dec 08 19:06:39 crc kubenswrapper[4998]: I1208 19:06:39.160255 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Dec 08 19:06:42 crc kubenswrapper[4998]: I1208 19:06:42.154926 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13","Type":"ContainerStarted","Data":"fc660d35231fbb93e0ef442866ad1a5c52a6b41857fbe0ca5b9709d3868d620f"}
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.301474 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"]
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.338023 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"]
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.338264 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.345161 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\""
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.345975 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\""
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.390174 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/4c71e6da-9b75-4899-af79-4693ea2e0afe-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.390844 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4c71e6da-9b75-4899-af79-4693ea2e0afe-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.391123 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lgwr\" (UniqueName: \"kubernetes.io/projected/4c71e6da-9b75-4899-af79-4693ea2e0afe-kube-api-access-4lgwr\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.391241 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4c71e6da-9b75-4899-af79-4693ea2e0afe-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.492342 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4lgwr\" (UniqueName: \"kubernetes.io/projected/4c71e6da-9b75-4899-af79-4693ea2e0afe-kube-api-access-4lgwr\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.492400 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4c71e6da-9b75-4899-af79-4693ea2e0afe-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.492440 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/4c71e6da-9b75-4899-af79-4693ea2e0afe-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.492505 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4c71e6da-9b75-4899-af79-4693ea2e0afe-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.493973 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4c71e6da-9b75-4899-af79-4693ea2e0afe-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.494551 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4c71e6da-9b75-4899-af79-4693ea2e0afe-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.501844 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/4c71e6da-9b75-4899-af79-4693ea2e0afe-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.514155 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lgwr\" (UniqueName: \"kubernetes.io/projected/4c71e6da-9b75-4899-af79-4693ea2e0afe-kube-api-access-4lgwr\") pod \"default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c\" (UID: \"4c71e6da-9b75-4899-af79-4693ea2e0afe\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
Dec 08 19:06:43 crc kubenswrapper[4998]: I1208 19:06:43.667836 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"
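The event={...} payloads in the PLEG entries above are structured klog values; the Data field carries the container (or sandbox) ID the event refers to, and the ID field is the pod UID. A sketch that decodes one payload copied verbatim from the alertmanager entry above into a struct mirroring the printed fields (the struct is defined here for illustration and is not kubelet's internal type):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // Mirrors the fields printed in "SyncLoop (PLEG): event for pod" entries.
    type plegEvent struct {
        ID   string `json:"ID"`   // pod UID
        Type string `json:"Type"` // e.g. ContainerStarted, ContainerDied
        Data string `json:"Data"` // container or sandbox ID
    }

    func main() {
        raw := `{"ID":"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13","Type":"ContainerStarted","Data":"fc660d35231fbb93e0ef442866ad1a5c52a6b41857fbe0ca5b9709d3868d620f"}`
        var ev plegEvent
        if err := json.Unmarshal([]byte(raw), &ev); err != nil {
            panic(err)
        }
        fmt.Printf("pod %s: %s %s\n", ev.ID, ev.Type, ev.Data)
    }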
Dec 08 19:06:44 crc kubenswrapper[4998]: I1208 19:06:44.164347 4998 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Dec 08 19:06:44 crc kubenswrapper[4998]: I1208 19:06:44.182765 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13","Type":"ContainerStarted","Data":"bd7921e2d860ec5b2aff2673097c3f4424267dadd8ac2112603930f3a24e6519"}
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.528010 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"]
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.557625 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"]
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.558283 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.565775 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\""
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.629127 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c2891421-e1ac-441a-8703-dadd0ac37e8f-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.629192 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c2891421-e1ac-441a-8703-dadd0ac37e8f-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.629408 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x6dw9\" (UniqueName: \"kubernetes.io/projected/c2891421-e1ac-441a-8703-dadd0ac37e8f-kube-api-access-x6dw9\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.629515 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c2891421-e1ac-441a-8703-dadd0ac37e8f-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.730919 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-x6dw9\" (UniqueName: \"kubernetes.io/projected/c2891421-e1ac-441a-8703-dadd0ac37e8f-kube-api-access-x6dw9\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.730983 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c2891421-e1ac-441a-8703-dadd0ac37e8f-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.731076 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c2891421-e1ac-441a-8703-dadd0ac37e8f-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.731097 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c2891421-e1ac-441a-8703-dadd0ac37e8f-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.731660 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/c2891421-e1ac-441a-8703-dadd0ac37e8f-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.732856 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/c2891421-e1ac-441a-8703-dadd0ac37e8f-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.757162 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/c2891421-e1ac-441a-8703-dadd0ac37e8f-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.759120 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-x6dw9\" (UniqueName: \"kubernetes.io/projected/c2891421-e1ac-441a-8703-dadd0ac37e8f-kube-api-access-x6dw9\") pod \"default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2\" (UID: \"c2891421-e1ac-441a-8703-dadd0ac37e8f\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:45 crc kubenswrapper[4998]: I1208 19:06:45.889215 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"
Dec 08 19:06:49 crc kubenswrapper[4998]: I1208 19:06:49.710673 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c"]
Dec 08 19:06:49 crc kubenswrapper[4998]: I1208 19:06:49.779733 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2"]
Dec 08 19:06:50 crc kubenswrapper[4998]: I1208 19:06:50.233201 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" event={"ID":"4c71e6da-9b75-4899-af79-4693ea2e0afe","Type":"ContainerStarted","Data":"8b59bb02f93fa8459e451376faba016eadcea7885febb4c2da89cd2ec92f2466"}
Dec 08 19:06:50 crc kubenswrapper[4998]: I1208 19:06:50.245276 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f1c7871f-1319-4b9e-9f33-aaacc8ed7a13","Type":"ContainerStarted","Data":"9b1502ea9cbb0a67e7a5d98f058e29d99a8db9c18a6e86b2df05b51e5da21a27"}
Dec 08 19:06:50 crc kubenswrapper[4998]: I1208 19:06:50.252159 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerStarted","Data":"daa62999e73fa8270e4fa435026101fc0bf595309643a1915d6f14840fb65d26"}
Dec 08 19:06:50 crc kubenswrapper[4998]: I1208 19:06:50.254975 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" event={"ID":"c2891421-e1ac-441a-8703-dadd0ac37e8f","Type":"ContainerStarted","Data":"81006c1de4a11f22ff8adffe6ac834fd5c88b9392afa2a9406489f3f3da159f3"}
Dec 08 19:06:50 crc kubenswrapper[4998]: I1208 19:06:50.257653 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerStarted","Data":"813bb81fbeb5f34114ba0f9a792f89c1a56fe3214fd3540a22d7bdf247882735"}
Dec 08 19:06:50 crc kubenswrapper[4998]: I1208 19:06:50.260743 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerStarted","Data":"2d39a6180a0356b70e1ad07daf4b67e088ca77b372791580337db9e3d732d906"}
Dec 08 19:06:50 crc kubenswrapper[4998]: I1208 19:06:50.277744 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=28.762468925 podStartE2EDuration="46.277728012s" podCreationTimestamp="2025-12-08 19:06:04 +0000 UTC" firstStartedPulling="2025-12-08 19:06:32.00438381 +0000 UTC m=+895.652426500" lastFinishedPulling="2025-12-08 19:06:49.519642897 +0000 UTC m=+913.167685587" observedRunningTime="2025-12-08 19:06:50.273268282 +0000 UTC m=+913.921310972" watchObservedRunningTime="2025-12-08 19:06:50.277728012 +0000 UTC m=+913.925770702"
Dec 08 19:06:51 crc kubenswrapper[4998]: I1208 19:06:51.300666 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" event={"ID":"4c71e6da-9b75-4899-af79-4693ea2e0afe","Type":"ContainerStarted","Data":"86cd2f7edd61b97cff27ac9b009ee69fb2ea0af167ce7858e2ccaa3454044508"}
Dec 08 19:06:51 crc kubenswrapper[4998]: I1208 19:06:51.314286 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" event={"ID":"c2891421-e1ac-441a-8703-dadd0ac37e8f","Type":"ContainerStarted","Data":"cf92673b8231ee1f3f3c8e163c9afcb7dc57eed131bac30a2fba8727c66cd9e8"}
Dec 08 19:07:00 crc kubenswrapper[4998]: I1208 19:07:00.468096 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-x5ckw"]
Dec 08 19:07:00 crc kubenswrapper[4998]: I1208 19:07:00.469029 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" podUID="f6b0ee21-e7e0-4426-981d-33b1302b3b07" containerName="default-interconnect" containerID="cri-o://e4a7f0109680e0638f2ffaf599d6104619447b06c1f156f15421e2f73794aaf0" gracePeriod=30
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.443751 4998 generic.go:358] "Generic (PLEG): container finished" podID="928ecc36-3576-4c8c-895c-4ba1ce39c5dc" containerID="813bb81fbeb5f34114ba0f9a792f89c1a56fe3214fd3540a22d7bdf247882735" exitCode=0
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.443751 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerDied","Data":"813bb81fbeb5f34114ba0f9a792f89c1a56fe3214fd3540a22d7bdf247882735"}
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.448192 4998 generic.go:358] "Generic (PLEG): container finished" podID="6330b91f-03e0-49bb-a002-50fc85497f4c" containerID="2d39a6180a0356b70e1ad07daf4b67e088ca77b372791580337db9e3d732d906" exitCode=0
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.448259 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerDied","Data":"2d39a6180a0356b70e1ad07daf4b67e088ca77b372791580337db9e3d732d906"}
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.450353 4998 generic.go:358] "Generic (PLEG): container finished" podID="f6b0ee21-e7e0-4426-981d-33b1302b3b07" containerID="e4a7f0109680e0638f2ffaf599d6104619447b06c1f156f15421e2f73794aaf0" exitCode=0
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.450739 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" event={"ID":"f6b0ee21-e7e0-4426-981d-33b1302b3b07","Type":"ContainerDied","Data":"e4a7f0109680e0638f2ffaf599d6104619447b06c1f156f15421e2f73794aaf0"}
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.452243 4998 generic.go:358] "Generic (PLEG): container finished" podID="86068142-4ee8-4b18-bc23-3200d3517caf" containerID="daa62999e73fa8270e4fa435026101fc0bf595309643a1915d6f14840fb65d26" exitCode=0
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.452301 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerDied","Data":"daa62999e73fa8270e4fa435026101fc0bf595309643a1915d6f14840fb65d26"}
Dec 08 19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.453865 4998 generic.go:358] "Generic (PLEG): container finished" podID="c2891421-e1ac-441a-8703-dadd0ac37e8f" containerID="cf92673b8231ee1f3f3c8e163c9afcb7dc57eed131bac30a2fba8727c66cd9e8" exitCode=0
19:07:01 crc kubenswrapper[4998]: I1208 19:07:01.453893 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" event={"ID":"c2891421-e1ac-441a-8703-dadd0ac37e8f","Type":"ContainerDied","Data":"cf92673b8231ee1f3f3c8e163c9afcb7dc57eed131bac30a2fba8727c66cd9e8"} Dec 08 19:07:01 crc kubenswrapper[4998]: E1208 19:07:01.564972 4998 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c71e6da_9b75_4899_af79_4693ea2e0afe.slice/crio-86cd2f7edd61b97cff27ac9b009ee69fb2ea0af167ce7858e2ccaa3454044508.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c71e6da_9b75_4899_af79_4693ea2e0afe.slice/crio-conmon-86cd2f7edd61b97cff27ac9b009ee69fb2ea0af167ce7858e2ccaa3454044508.scope\": RecentStats: unable to find data in memory cache]" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.459532 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.472966 4998 generic.go:358] "Generic (PLEG): container finished" podID="4c71e6da-9b75-4899-af79-4693ea2e0afe" containerID="86cd2f7edd61b97cff27ac9b009ee69fb2ea0af167ce7858e2ccaa3454044508" exitCode=0 Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.473201 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" event={"ID":"4c71e6da-9b75-4899-af79-4693ea2e0afe","Type":"ContainerDied","Data":"86cd2f7edd61b97cff27ac9b009ee69fb2ea0af167ce7858e2ccaa3454044508"} Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.476317 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" event={"ID":"f6b0ee21-e7e0-4426-981d-33b1302b3b07","Type":"ContainerDied","Data":"194598b14630446df0b99aec6ee92a1bb13a81ef9763d86f668809570f884b3b"} Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.476469 4998 scope.go:117] "RemoveContainer" containerID="e4a7f0109680e0638f2ffaf599d6104619447b06c1f156f15421e2f73794aaf0" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.476470 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-x5ckw" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.503442 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-q7gxx"] Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.504642 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f6b0ee21-e7e0-4426-981d-33b1302b3b07" containerName="default-interconnect" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.504665 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6b0ee21-e7e0-4426-981d-33b1302b3b07" containerName="default-interconnect" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.504818 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="f6b0ee21-e7e0-4426-981d-33b1302b3b07" containerName="default-interconnect" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.513259 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.514683 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-users\") pod \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.514964 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-ca\") pod \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.514998 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-config\") pod \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.515051 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-credentials\") pod \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.515127 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5b778\" (UniqueName: \"kubernetes.io/projected/f6b0ee21-e7e0-4426-981d-33b1302b3b07-kube-api-access-5b778\") pod \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.515172 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-ca\") pod \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.515193 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-credentials\") pod \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\" (UID: \"f6b0ee21-e7e0-4426-981d-33b1302b3b07\") " Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.517172 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "f6b0ee21-e7e0-4426-981d-33b1302b3b07" (UID: "f6b0ee21-e7e0-4426-981d-33b1302b3b07"). InnerVolumeSpecName "sasl-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.549241 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "f6b0ee21-e7e0-4426-981d-33b1302b3b07" (UID: "f6b0ee21-e7e0-4426-981d-33b1302b3b07"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.550095 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "f6b0ee21-e7e0-4426-981d-33b1302b3b07" (UID: "f6b0ee21-e7e0-4426-981d-33b1302b3b07"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.552820 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "f6b0ee21-e7e0-4426-981d-33b1302b3b07" (UID: "f6b0ee21-e7e0-4426-981d-33b1302b3b07"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.552900 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "f6b0ee21-e7e0-4426-981d-33b1302b3b07" (UID: "f6b0ee21-e7e0-4426-981d-33b1302b3b07"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.556143 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "f6b0ee21-e7e0-4426-981d-33b1302b3b07" (UID: "f6b0ee21-e7e0-4426-981d-33b1302b3b07"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.557264 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6b0ee21-e7e0-4426-981d-33b1302b3b07-kube-api-access-5b778" (OuterVolumeSpecName: "kube-api-access-5b778") pod "f6b0ee21-e7e0-4426-981d-33b1302b3b07" (UID: "f6b0ee21-e7e0-4426-981d-33b1302b3b07"). InnerVolumeSpecName "kube-api-access-5b778". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.609877 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-q7gxx"] Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.619788 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.619999 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620107 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2v8rl\" (UniqueName: \"kubernetes.io/projected/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-kube-api-access-2v8rl\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620187 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620261 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620369 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-sasl-config\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620463 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-sasl-users\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620565 4998 reconciler_common.go:299] "Volume 
detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-users\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620631 4998 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620709 4998 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f6b0ee21-e7e0-4426-981d-33b1302b3b07-sasl-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620792 4998 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620856 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5b778\" (UniqueName: \"kubernetes.io/projected/f6b0ee21-e7e0-4426-981d-33b1302b3b07-kube-api-access-5b778\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620915 4998 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.620974 4998 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f6b0ee21-e7e0-4426-981d-33b1302b3b07-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.722968 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-sasl-users\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.723339 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.723456 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.723566 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2v8rl\" (UniqueName: 
\"kubernetes.io/projected/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-kube-api-access-2v8rl\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.723700 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.725460 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.726110 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-sasl-config\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.727279 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-sasl-config\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.729355 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-sasl-users\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.729564 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.730338 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.730984 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-inter-router-credentials\") 
pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.737910 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.746060 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2v8rl\" (UniqueName: \"kubernetes.io/projected/9181a9fe-11a5-4a73-bbec-6a78a7ec27a8-kube-api-access-2v8rl\") pod \"default-interconnect-55bf8d5cb-q7gxx\" (UID: \"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.808122 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-x5ckw"] Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.813672 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-x5ckw"] Dec 08 19:07:02 crc kubenswrapper[4998]: I1208 19:07:02.861492 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.094233 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-q7gxx"] Dec 08 19:07:03 crc kubenswrapper[4998]: W1208 19:07:03.103053 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9181a9fe_11a5_4a73_bbec_6a78a7ec27a8.slice/crio-adee2ac92f803788696678da3137141bf19eaaf689ecb5b4e5386188b44dbff9 WatchSource:0}: Error finding container adee2ac92f803788696678da3137141bf19eaaf689ecb5b4e5386188b44dbff9: Status 404 returned error can't find the container with id adee2ac92f803788696678da3137141bf19eaaf689ecb5b4e5386188b44dbff9 Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.376202 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b0ee21-e7e0-4426-981d-33b1302b3b07" path="/var/lib/kubelet/pods/f6b0ee21-e7e0-4426-981d-33b1302b3b07/volumes" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.489620 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerStarted","Data":"d1d18ff09f5073a05182b04c594eb6451ac425dfa12f7faadb753647940e6bb1"} Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.490373 4998 scope.go:117] "RemoveContainer" containerID="daa62999e73fa8270e4fa435026101fc0bf595309643a1915d6f14840fb65d26" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.493791 4998 scope.go:117] "RemoveContainer" containerID="cf92673b8231ee1f3f3c8e163c9afcb7dc57eed131bac30a2fba8727c66cd9e8" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.495236 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" 
event={"ID":"c2891421-e1ac-441a-8703-dadd0ac37e8f","Type":"ContainerStarted","Data":"e6aeb6bbf78cd718bba3521842eca23ea9ae5169ba696de3a0ea0fe98909ccb6"} Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.501095 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerStarted","Data":"d0493f38883042bd77e1c306a34cbc9fd65e77331c995b56a542c7e406bf9906"} Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.501598 4998 scope.go:117] "RemoveContainer" containerID="813bb81fbeb5f34114ba0f9a792f89c1a56fe3214fd3540a22d7bdf247882735" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.504243 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" event={"ID":"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8","Type":"ContainerStarted","Data":"b85d61bed5deb61ceae659fba9656fa13b971b3df3b7e2932b68570cdb1ee4cb"} Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.504286 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" event={"ID":"9181a9fe-11a5-4a73-bbec-6a78a7ec27a8","Type":"ContainerStarted","Data":"adee2ac92f803788696678da3137141bf19eaaf689ecb5b4e5386188b44dbff9"} Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.509536 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerStarted","Data":"821927906a17eace7cff5b8e8e5c8d6879744f931fb8ed45b95df00dafa56de9"} Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.510318 4998 scope.go:117] "RemoveContainer" containerID="2d39a6180a0356b70e1ad07daf4b67e088ca77b372791580337db9e3d732d906" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.532039 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" event={"ID":"4c71e6da-9b75-4899-af79-4693ea2e0afe","Type":"ContainerStarted","Data":"527cffa9bcb5e3cb41f233342587980d96ac19c45c8858b26d54de9702edbc1a"} Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.532559 4998 scope.go:117] "RemoveContainer" containerID="86cd2f7edd61b97cff27ac9b009ee69fb2ea0af167ce7858e2ccaa3454044508" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.589440 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-q7gxx" podStartSLOduration=3.589414678 podStartE2EDuration="3.589414678s" podCreationTimestamp="2025-12-08 19:07:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-12-08 19:07:03.584979048 +0000 UTC m=+927.233021778" watchObservedRunningTime="2025-12-08 19:07:03.589414678 +0000 UTC m=+927.237457378" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.709798 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9pvtd"] Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.720782 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.740303 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9pvtd"] Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.748898 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-catalog-content\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.748948 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-utilities\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.748983 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj9b4\" (UniqueName: \"kubernetes.io/projected/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-kube-api-access-fj9b4\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.849861 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-catalog-content\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.849907 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-utilities\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.849937 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fj9b4\" (UniqueName: \"kubernetes.io/projected/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-kube-api-access-fj9b4\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.850708 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-catalog-content\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.850937 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-utilities\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:03 crc kubenswrapper[4998]: I1208 19:07:03.887700 4998 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-fj9b4\" (UniqueName: \"kubernetes.io/projected/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-kube-api-access-fj9b4\") pod \"community-operators-9pvtd\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") " pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.055302 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9pvtd" Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.384956 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9pvtd"] Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.549250 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" event={"ID":"4c71e6da-9b75-4899-af79-4693ea2e0afe","Type":"ContainerStarted","Data":"d6ba341654660db566584c8d90ed4d47b3ee38ef8e6994ff9715023253d846cf"} Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.551853 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerStarted","Data":"d16814a5667a6fedae383b37c287382a13dec56e60a34e0c84ba3b19adc535ad"} Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.563896 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerStarted","Data":"805b6c5c311acaaaba0ec0f1310cd6ebcc860abd17da90456fe6c48f27903efc"} Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.574121 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerStarted","Data":"5ee3ee4395dee74e808576ca4fc52fffe00778e6ede27941a17cb69af9f191db"} Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.579390 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" podStartSLOduration=7.329041359 podStartE2EDuration="21.579373371s" podCreationTimestamp="2025-12-08 19:06:43 +0000 UTC" firstStartedPulling="2025-12-08 19:06:49.757550699 +0000 UTC m=+913.405593389" lastFinishedPulling="2025-12-08 19:07:04.007882711 +0000 UTC m=+927.655925401" observedRunningTime="2025-12-08 19:07:04.575099435 +0000 UTC m=+928.223142125" watchObservedRunningTime="2025-12-08 19:07:04.579373371 +0000 UTC m=+928.227416061" Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.579448 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pvtd" event={"ID":"0a5fecfb-de42-4870-b4d9-fe82d0ae3122","Type":"ContainerStarted","Data":"3b95fecc6f26942ef3c8c8b779c762b54fba208110bd6bb6a834a5352574b26e"} Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.599335 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" podStartSLOduration=3.48983367 podStartE2EDuration="40.59931964s" podCreationTimestamp="2025-12-08 19:06:24 +0000 UTC" firstStartedPulling="2025-12-08 19:06:26.803875536 +0000 UTC m=+890.451918226" lastFinishedPulling="2025-12-08 19:07:03.913361506 +0000 UTC m=+927.561404196" 
observedRunningTime="2025-12-08 19:07:04.596844364 +0000 UTC m=+928.244887054" watchObservedRunningTime="2025-12-08 19:07:04.59931964 +0000 UTC m=+928.247362330" Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.672459 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" podStartSLOduration=6.496877122 podStartE2EDuration="32.672442738s" podCreationTimestamp="2025-12-08 19:06:32 +0000 UTC" firstStartedPulling="2025-12-08 19:06:37.793774744 +0000 UTC m=+901.441817434" lastFinishedPulling="2025-12-08 19:07:03.96934036 +0000 UTC m=+927.617383050" observedRunningTime="2025-12-08 19:07:04.670638498 +0000 UTC m=+928.318681178" watchObservedRunningTime="2025-12-08 19:07:04.672442738 +0000 UTC m=+928.320485428" Dec 08 19:07:04 crc kubenswrapper[4998]: I1208 19:07:04.673618 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" podStartSLOduration=10.425455611 podStartE2EDuration="36.673611999s" podCreationTimestamp="2025-12-08 19:06:28 +0000 UTC" firstStartedPulling="2025-12-08 19:06:37.700184054 +0000 UTC m=+901.348226744" lastFinishedPulling="2025-12-08 19:07:03.948340452 +0000 UTC m=+927.596383132" observedRunningTime="2025-12-08 19:07:04.636640509 +0000 UTC m=+928.284683219" watchObservedRunningTime="2025-12-08 19:07:04.673611999 +0000 UTC m=+928.321654689" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.591363 4998 generic.go:358] "Generic (PLEG): container finished" podID="86068142-4ee8-4b18-bc23-3200d3517caf" containerID="d16814a5667a6fedae383b37c287382a13dec56e60a34e0c84ba3b19adc535ad" exitCode=0 Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.592174 4998 scope.go:117] "RemoveContainer" containerID="d16814a5667a6fedae383b37c287382a13dec56e60a34e0c84ba3b19adc535ad" Dec 08 19:07:05 crc kubenswrapper[4998]: E1208 19:07:05.592448 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx_service-telemetry(86068142-4ee8-4b18-bc23-3200d3517caf)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" podUID="86068142-4ee8-4b18-bc23-3200d3517caf" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.594863 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerDied","Data":"d16814a5667a6fedae383b37c287382a13dec56e60a34e0c84ba3b19adc535ad"} Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.594903 4998 scope.go:117] "RemoveContainer" containerID="daa62999e73fa8270e4fa435026101fc0bf595309643a1915d6f14840fb65d26" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.596249 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" event={"ID":"c2891421-e1ac-441a-8703-dadd0ac37e8f","Type":"ContainerStarted","Data":"defc79f415ef095f5389bb57417f244f950e558929607967128ae0c23c07f476"} Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.601391 4998 generic.go:358] "Generic (PLEG): container finished" podID="928ecc36-3576-4c8c-895c-4ba1ce39c5dc" containerID="805b6c5c311acaaaba0ec0f1310cd6ebcc860abd17da90456fe6c48f27903efc" exitCode=0 Dec 08 19:07:05 crc 
kubenswrapper[4998]: I1208 19:07:05.601492 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerDied","Data":"805b6c5c311acaaaba0ec0f1310cd6ebcc860abd17da90456fe6c48f27903efc"} Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.601819 4998 scope.go:117] "RemoveContainer" containerID="805b6c5c311acaaaba0ec0f1310cd6ebcc860abd17da90456fe6c48f27903efc" Dec 08 19:07:05 crc kubenswrapper[4998]: E1208 19:07:05.602000 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr_service-telemetry(928ecc36-3576-4c8c-895c-4ba1ce39c5dc)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" podUID="928ecc36-3576-4c8c-895c-4ba1ce39c5dc" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.605107 4998 generic.go:358] "Generic (PLEG): container finished" podID="6330b91f-03e0-49bb-a002-50fc85497f4c" containerID="5ee3ee4395dee74e808576ca4fc52fffe00778e6ede27941a17cb69af9f191db" exitCode=0 Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.605251 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerDied","Data":"5ee3ee4395dee74e808576ca4fc52fffe00778e6ede27941a17cb69af9f191db"} Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.605569 4998 scope.go:117] "RemoveContainer" containerID="5ee3ee4395dee74e808576ca4fc52fffe00778e6ede27941a17cb69af9f191db" Dec 08 19:07:05 crc kubenswrapper[4998]: E1208 19:07:05.605772 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-64t4w_service-telemetry(6330b91f-03e0-49bb-a002-50fc85497f4c)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" podUID="6330b91f-03e0-49bb-a002-50fc85497f4c" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.607279 4998 generic.go:358] "Generic (PLEG): container finished" podID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerID="9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12" exitCode=0 Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.607362 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pvtd" event={"ID":"0a5fecfb-de42-4870-b4d9-fe82d0ae3122","Type":"ContainerDied","Data":"9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12"} Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.636574 4998 generic.go:358] "Generic (PLEG): container finished" podID="4c71e6da-9b75-4899-af79-4693ea2e0afe" containerID="d6ba341654660db566584c8d90ed4d47b3ee38ef8e6994ff9715023253d846cf" exitCode=0 Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.636648 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" event={"ID":"4c71e6da-9b75-4899-af79-4693ea2e0afe","Type":"ContainerDied","Data":"d6ba341654660db566584c8d90ed4d47b3ee38ef8e6994ff9715023253d846cf"} Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.638005 4998 scope.go:117] "RemoveContainer" 
containerID="d6ba341654660db566584c8d90ed4d47b3ee38ef8e6994ff9715023253d846cf" Dec 08 19:07:05 crc kubenswrapper[4998]: E1208 19:07:05.638495 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c_service-telemetry(4c71e6da-9b75-4899-af79-4693ea2e0afe)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" podUID="4c71e6da-9b75-4899-af79-4693ea2e0afe" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.665371 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" podStartSLOduration=6.142671136 podStartE2EDuration="20.6653551s" podCreationTimestamp="2025-12-08 19:06:45 +0000 UTC" firstStartedPulling="2025-12-08 19:06:49.835679932 +0000 UTC m=+913.483722622" lastFinishedPulling="2025-12-08 19:07:04.358363896 +0000 UTC m=+928.006406586" observedRunningTime="2025-12-08 19:07:05.661982889 +0000 UTC m=+929.310025579" watchObservedRunningTime="2025-12-08 19:07:05.6653551 +0000 UTC m=+929.313397790" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.690931 4998 scope.go:117] "RemoveContainer" containerID="813bb81fbeb5f34114ba0f9a792f89c1a56fe3214fd3540a22d7bdf247882735" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.779375 4998 scope.go:117] "RemoveContainer" containerID="2d39a6180a0356b70e1ad07daf4b67e088ca77b372791580337db9e3d732d906" Dec 08 19:07:05 crc kubenswrapper[4998]: I1208 19:07:05.903591 4998 scope.go:117] "RemoveContainer" containerID="86cd2f7edd61b97cff27ac9b009ee69fb2ea0af167ce7858e2ccaa3454044508" Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.646829 4998 scope.go:117] "RemoveContainer" containerID="d16814a5667a6fedae383b37c287382a13dec56e60a34e0c84ba3b19adc535ad" Dec 08 19:07:06 crc kubenswrapper[4998]: E1208 19:07:06.647404 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx_service-telemetry(86068142-4ee8-4b18-bc23-3200d3517caf)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" podUID="86068142-4ee8-4b18-bc23-3200d3517caf" Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.648868 4998 generic.go:358] "Generic (PLEG): container finished" podID="c2891421-e1ac-441a-8703-dadd0ac37e8f" containerID="defc79f415ef095f5389bb57417f244f950e558929607967128ae0c23c07f476" exitCode=0 Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.648949 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" event={"ID":"c2891421-e1ac-441a-8703-dadd0ac37e8f","Type":"ContainerDied","Data":"defc79f415ef095f5389bb57417f244f950e558929607967128ae0c23c07f476"} Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.649085 4998 scope.go:117] "RemoveContainer" containerID="cf92673b8231ee1f3f3c8e163c9afcb7dc57eed131bac30a2fba8727c66cd9e8" Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.649247 4998 scope.go:117] "RemoveContainer" containerID="defc79f415ef095f5389bb57417f244f950e558929607967128ae0c23c07f476" Dec 08 19:07:06 crc kubenswrapper[4998]: E1208 19:07:06.649456 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with 
CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2_service-telemetry(c2891421-e1ac-441a-8703-dadd0ac37e8f)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" podUID="c2891421-e1ac-441a-8703-dadd0ac37e8f" Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.652461 4998 scope.go:117] "RemoveContainer" containerID="805b6c5c311acaaaba0ec0f1310cd6ebcc860abd17da90456fe6c48f27903efc" Dec 08 19:07:06 crc kubenswrapper[4998]: E1208 19:07:06.652642 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr_service-telemetry(928ecc36-3576-4c8c-895c-4ba1ce39c5dc)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" podUID="928ecc36-3576-4c8c-895c-4ba1ce39c5dc" Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.677552 4998 scope.go:117] "RemoveContainer" containerID="5ee3ee4395dee74e808576ca4fc52fffe00778e6ede27941a17cb69af9f191db" Dec 08 19:07:06 crc kubenswrapper[4998]: E1208 19:07:06.677935 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-787645d794-64t4w_service-telemetry(6330b91f-03e0-49bb-a002-50fc85497f4c)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" podUID="6330b91f-03e0-49bb-a002-50fc85497f4c" Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.686294 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pvtd" event={"ID":"0a5fecfb-de42-4870-b4d9-fe82d0ae3122","Type":"ContainerStarted","Data":"4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700"} Dec 08 19:07:06 crc kubenswrapper[4998]: I1208 19:07:06.691960 4998 scope.go:117] "RemoveContainer" containerID="d6ba341654660db566584c8d90ed4d47b3ee38ef8e6994ff9715023253d846cf" Dec 08 19:07:06 crc kubenswrapper[4998]: E1208 19:07:06.692494 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c_service-telemetry(4c71e6da-9b75-4899-af79-4693ea2e0afe)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" podUID="4c71e6da-9b75-4899-af79-4693ea2e0afe" Dec 08 19:07:07 crc kubenswrapper[4998]: I1208 19:07:07.702778 4998 generic.go:358] "Generic (PLEG): container finished" podID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerID="4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700" exitCode=0 Dec 08 19:07:07 crc kubenswrapper[4998]: I1208 19:07:07.702883 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pvtd" event={"ID":"0a5fecfb-de42-4870-b4d9-fe82d0ae3122","Type":"ContainerDied","Data":"4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700"} Dec 08 19:07:07 crc kubenswrapper[4998]: I1208 19:07:07.710146 4998 scope.go:117] "RemoveContainer" containerID="defc79f415ef095f5389bb57417f244f950e558929607967128ae0c23c07f476" Dec 08 19:07:07 crc kubenswrapper[4998]: E1208 19:07:07.710611 4998 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" 
with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2_service-telemetry(c2891421-e1ac-441a-8703-dadd0ac37e8f)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" podUID="c2891421-e1ac-441a-8703-dadd0ac37e8f"
Dec 08 19:07:08 crc kubenswrapper[4998]: I1208 19:07:08.717189 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pvtd" event={"ID":"0a5fecfb-de42-4870-b4d9-fe82d0ae3122","Type":"ContainerStarted","Data":"8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4"}
Dec 08 19:07:08 crc kubenswrapper[4998]: I1208 19:07:08.740881 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9pvtd" podStartSLOduration=4.9828879839999995 podStartE2EDuration="5.740860195s" podCreationTimestamp="2025-12-08 19:07:03 +0000 UTC" firstStartedPulling="2025-12-08 19:07:05.607872707 +0000 UTC m=+929.255915397" lastFinishedPulling="2025-12-08 19:07:06.365844908 +0000 UTC m=+930.013887608" observedRunningTime="2025-12-08 19:07:08.73621325 +0000 UTC m=+932.384255930" watchObservedRunningTime="2025-12-08 19:07:08.740860195 +0000 UTC m=+932.388902885"
Dec 08 19:07:14 crc kubenswrapper[4998]: I1208 19:07:14.055921 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9pvtd"
Dec 08 19:07:14 crc kubenswrapper[4998]: I1208 19:07:14.056468 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-9pvtd"
Dec 08 19:07:14 crc kubenswrapper[4998]: I1208 19:07:14.105810 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9pvtd"
Dec 08 19:07:14 crc kubenswrapper[4998]: I1208 19:07:14.809131 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9pvtd"
Dec 08 19:07:14 crc kubenswrapper[4998]: I1208 19:07:14.869173 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9pvtd"]
Dec 08 19:07:16 crc kubenswrapper[4998]: I1208 19:07:16.777388 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9pvtd" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="registry-server" containerID="cri-o://8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4" gracePeriod=2
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.279412 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9pvtd"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.340730 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fj9b4\" (UniqueName: \"kubernetes.io/projected/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-kube-api-access-fj9b4\") pod \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") "
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.340912 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-catalog-content\") pod \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") "
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.341077 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-utilities\") pod \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\" (UID: \"0a5fecfb-de42-4870-b4d9-fe82d0ae3122\") "
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.347158 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-utilities" (OuterVolumeSpecName: "utilities") pod "0a5fecfb-de42-4870-b4d9-fe82d0ae3122" (UID: "0a5fecfb-de42-4870-b4d9-fe82d0ae3122"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.366596 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-kube-api-access-fj9b4" (OuterVolumeSpecName: "kube-api-access-fj9b4") pod "0a5fecfb-de42-4870-b4d9-fe82d0ae3122" (UID: "0a5fecfb-de42-4870-b4d9-fe82d0ae3122"). InnerVolumeSpecName "kube-api-access-fj9b4". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.387845 4998 scope.go:117] "RemoveContainer" containerID="5ee3ee4395dee74e808576ca4fc52fffe00778e6ede27941a17cb69af9f191db"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.475947 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-utilities\") on node \"crc\" DevicePath \"\""
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.475988 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fj9b4\" (UniqueName: \"kubernetes.io/projected/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-kube-api-access-fj9b4\") on node \"crc\" DevicePath \"\""
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.505011 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0a5fecfb-de42-4870-b4d9-fe82d0ae3122" (UID: "0a5fecfb-de42-4870-b4d9-fe82d0ae3122"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue ""
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.577246 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0a5fecfb-de42-4870-b4d9-fe82d0ae3122-catalog-content\") on node \"crc\" DevicePath \"\""
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.787535 4998 generic.go:358] "Generic (PLEG): container finished" podID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerID="8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4" exitCode=0
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.787600 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pvtd" event={"ID":"0a5fecfb-de42-4870-b4d9-fe82d0ae3122","Type":"ContainerDied","Data":"8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4"}
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.788594 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9pvtd" event={"ID":"0a5fecfb-de42-4870-b4d9-fe82d0ae3122","Type":"ContainerDied","Data":"3b95fecc6f26942ef3c8c8b779c762b54fba208110bd6bb6a834a5352574b26e"}
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.787670 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9pvtd"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.788622 4998 scope.go:117] "RemoveContainer" containerID="8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.830053 4998 scope.go:117] "RemoveContainer" containerID="4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.839507 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9pvtd"]
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.849643 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9pvtd"]
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.868944 4998 scope.go:117] "RemoveContainer" containerID="9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.920765 4998 scope.go:117] "RemoveContainer" containerID="8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4"
Dec 08 19:07:17 crc kubenswrapper[4998]: E1208 19:07:17.924026 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4\": container with ID starting with 8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4 not found: ID does not exist" containerID="8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.924097 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4"} err="failed to get container status \"8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4\": rpc error: code = NotFound desc = could not find container \"8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4\": container with ID starting with 8ac6e5889d661ea7fb668532f9967d5d9134a3e7e2a3e062204d9d93516855c4 not found: ID does not exist"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.924169 4998 scope.go:117] "RemoveContainer" containerID="4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700"
Dec 08 19:07:17 crc kubenswrapper[4998]: E1208 19:07:17.925424 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700\": container with ID starting with 4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700 not found: ID does not exist" containerID="4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.925460 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700"} err="failed to get container status \"4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700\": rpc error: code = NotFound desc = could not find container \"4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700\": container with ID starting with 4fc3447e1054b1438e5795805b503cda8936f614c0dd149d1d9e704d64783700 not found: ID does not exist"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.925480 4998 scope.go:117] "RemoveContainer" containerID="9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12"
Dec 08 19:07:17 crc kubenswrapper[4998]: E1208 19:07:17.925880 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12\": container with ID starting with 9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12 not found: ID does not exist" containerID="9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12"
Dec 08 19:07:17 crc kubenswrapper[4998]: I1208 19:07:17.925909 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12"} err="failed to get container status \"9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12\": rpc error: code = NotFound desc = could not find container \"9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12\": container with ID starting with 9d2d39a001161888203496d0ae6eae325050d8b2030c5e1981b196a8a0c7ca12 not found: ID does not exist"
Dec 08 19:07:18 crc kubenswrapper[4998]: I1208 19:07:18.366433 4998 scope.go:117] "RemoveContainer" containerID="defc79f415ef095f5389bb57417f244f950e558929607967128ae0c23c07f476"
Dec 08 19:07:18 crc kubenswrapper[4998]: I1208 19:07:18.816435 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-787645d794-64t4w" event={"ID":"6330b91f-03e0-49bb-a002-50fc85497f4c","Type":"ContainerStarted","Data":"a9c7a26db2fe0641c4818b26b7bbfc423e376dccbfe96e0fa6fa9236de65582d"}
Dec 08 19:07:19 crc kubenswrapper[4998]: I1208 19:07:19.366230 4998 scope.go:117] "RemoveContainer" containerID="805b6c5c311acaaaba0ec0f1310cd6ebcc860abd17da90456fe6c48f27903efc"
Dec 08 19:07:19 crc kubenswrapper[4998]: I1208 19:07:19.367384 4998 scope.go:117] "RemoveContainer" containerID="d16814a5667a6fedae383b37c287382a13dec56e60a34e0c84ba3b19adc535ad"
Dec 08 19:07:19 crc kubenswrapper[4998]: I1208 19:07:19.375571 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" path="/var/lib/kubelet/pods/0a5fecfb-de42-4870-b4d9-fe82d0ae3122/volumes"
Dec 08 19:07:19 crc kubenswrapper[4998]: I1208 19:07:19.894325 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-c97c795d5-grtf2" event={"ID":"c2891421-e1ac-441a-8703-dadd0ac37e8f","Type":"ContainerStarted","Data":"4682577cdc99998b6c49facd95518ac68fdbc615120208b5d83b4cea65e8aeda"}
Dec 08 19:07:20 crc kubenswrapper[4998]: I1208 19:07:20.367537 4998 scope.go:117] "RemoveContainer" containerID="d6ba341654660db566584c8d90ed4d47b3ee38ef8e6994ff9715023253d846cf"
Dec 08 19:07:20 crc kubenswrapper[4998]: I1208 19:07:20.908409 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-545b564d9f-bqhjx" event={"ID":"86068142-4ee8-4b18-bc23-3200d3517caf","Type":"ContainerStarted","Data":"dee918d5f50803e3de33626b6c7f1d4f4ac62d195660fe5aef65b5c682409c51"}
Dec 08 19:07:20 crc kubenswrapper[4998]: I1208 19:07:20.918595 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-66d5b7c5fc-l29gr" event={"ID":"928ecc36-3576-4c8c-895c-4ba1ce39c5dc","Type":"ContainerStarted","Data":"4322087745f2c4e800643c7f6af57513b0ff177a1b4c0e9498005efa50881d76"}
Dec 08 19:07:20 crc kubenswrapper[4998]: I1208 19:07:20.922254 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-d6bb559d7-ncb5c" event={"ID":"4c71e6da-9b75-4899-af79-4693ea2e0afe","Type":"ContainerStarted","Data":"c0d82503eeff3d110dce86f031ec6672926a69b9dfcbb50069667a88bc67f156"}
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.753589 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5cnk5"]
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.755618 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="extract-utilities"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.755665 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="extract-utilities"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.755706 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="registry-server"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.755713 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="registry-server"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.755752 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="extract-content"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.755760 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="extract-content"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.755926 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="0a5fecfb-de42-4870-b4d9-fe82d0ae3122" containerName="registry-server"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.766480 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.782261 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cnk5"]
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.946989 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnzhk\" (UniqueName: \"kubernetes.io/projected/38eaf202-fddf-46dd-ab7a-859415be736e-kube-api-access-lnzhk\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.947089 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-utilities\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:25 crc kubenswrapper[4998]: I1208 19:07:25.947539 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-catalog-content\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.049305 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lnzhk\" (UniqueName: \"kubernetes.io/projected/38eaf202-fddf-46dd-ab7a-859415be736e-kube-api-access-lnzhk\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.049407 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-utilities\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.049478 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-catalog-content\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.050003 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-utilities\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.050023 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-catalog-content\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.074667 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnzhk\" (UniqueName: \"kubernetes.io/projected/38eaf202-fddf-46dd-ab7a-859415be736e-kube-api-access-lnzhk\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5"
"MountVolume.SetUp succeeded for volume \"kube-api-access-lnzhk\" (UniqueName: \"kubernetes.io/projected/38eaf202-fddf-46dd-ab7a-859415be736e-kube-api-access-lnzhk\") pod \"certified-operators-5cnk5\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.089229 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.442735 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5cnk5"] Dec 08 19:07:26 crc kubenswrapper[4998]: I1208 19:07:26.961951 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cnk5" event={"ID":"38eaf202-fddf-46dd-ab7a-859415be736e","Type":"ContainerStarted","Data":"4f30e358e7c4b8e06ccf57abbeb8dff5e8f45dcaebd2a93cb12f1857b0ccc013"} Dec 08 19:07:27 crc kubenswrapper[4998]: I1208 19:07:27.972791 4998 generic.go:358] "Generic (PLEG): container finished" podID="38eaf202-fddf-46dd-ab7a-859415be736e" containerID="349f486324c6b077ef3b0d498724baa7c6e7da019858b7cad78203ee1760e7e9" exitCode=0 Dec 08 19:07:27 crc kubenswrapper[4998]: I1208 19:07:27.973346 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cnk5" event={"ID":"38eaf202-fddf-46dd-ab7a-859415be736e","Type":"ContainerDied","Data":"349f486324c6b077ef3b0d498724baa7c6e7da019858b7cad78203ee1760e7e9"} Dec 08 19:07:28 crc kubenswrapper[4998]: I1208 19:07:28.983059 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cnk5" event={"ID":"38eaf202-fddf-46dd-ab7a-859415be736e","Type":"ContainerStarted","Data":"3da1f89a90c7dd52fdcbfc4f0105d24dc455c8d7f0d40d6e6d3db58f7666f03e"} Dec 08 19:07:30 crc kubenswrapper[4998]: I1208 19:07:30.008107 4998 generic.go:358] "Generic (PLEG): container finished" podID="38eaf202-fddf-46dd-ab7a-859415be736e" containerID="3da1f89a90c7dd52fdcbfc4f0105d24dc455c8d7f0d40d6e6d3db58f7666f03e" exitCode=0 Dec 08 19:07:30 crc kubenswrapper[4998]: I1208 19:07:30.008385 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cnk5" event={"ID":"38eaf202-fddf-46dd-ab7a-859415be736e","Type":"ContainerDied","Data":"3da1f89a90c7dd52fdcbfc4f0105d24dc455c8d7f0d40d6e6d3db58f7666f03e"} Dec 08 19:07:31 crc kubenswrapper[4998]: I1208 19:07:31.019465 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cnk5" event={"ID":"38eaf202-fddf-46dd-ab7a-859415be736e","Type":"ContainerStarted","Data":"4a581434344b46b239e3a4de9d8e9c629597864fa2e81e26fe2b913db6195977"} Dec 08 19:07:31 crc kubenswrapper[4998]: I1208 19:07:31.038267 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5cnk5" podStartSLOduration=5.339832683 podStartE2EDuration="6.038251044s" podCreationTimestamp="2025-12-08 19:07:25 +0000 UTC" firstStartedPulling="2025-12-08 19:07:27.974851757 +0000 UTC m=+951.622894447" lastFinishedPulling="2025-12-08 19:07:28.673270118 +0000 UTC m=+952.321312808" observedRunningTime="2025-12-08 19:07:31.037668599 +0000 UTC m=+954.685711289" watchObservedRunningTime="2025-12-08 19:07:31.038251044 +0000 UTC m=+954.686293734" Dec 08 19:07:31 crc kubenswrapper[4998]: I1208 19:07:31.233397 4998 patch_prober.go:28] interesting 
pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:07:31 crc kubenswrapper[4998]: I1208 19:07:31.233491 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.151587 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.417450 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.424089 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.425599 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.427584 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.459670 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/52516d2d-ab6c-4696-97ad-dc2c6af49a45-qdr-test-config\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.459793 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/52516d2d-ab6c-4696-97ad-dc2c6af49a45-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.459963 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlft\" (UniqueName: \"kubernetes.io/projected/52516d2d-ab6c-4696-97ad-dc2c6af49a45-kube-api-access-4mlft\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.561249 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/52516d2d-ab6c-4696-97ad-dc2c6af49a45-qdr-test-config\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.561315 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/52516d2d-ab6c-4696-97ad-dc2c6af49a45-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.561394 4998 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-4mlft\" (UniqueName: \"kubernetes.io/projected/52516d2d-ab6c-4696-97ad-dc2c6af49a45-kube-api-access-4mlft\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.562735 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/52516d2d-ab6c-4696-97ad-dc2c6af49a45-qdr-test-config\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.577658 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/52516d2d-ab6c-4696-97ad-dc2c6af49a45-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.585237 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlft\" (UniqueName: \"kubernetes.io/projected/52516d2d-ab6c-4696-97ad-dc2c6af49a45-kube-api-access-4mlft\") pod \"qdr-test\" (UID: \"52516d2d-ab6c-4696-97ad-dc2c6af49a45\") " pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.736209 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Dec 08 19:07:33 crc kubenswrapper[4998]: I1208 19:07:33.991916 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Dec 08 19:07:34 crc kubenswrapper[4998]: I1208 19:07:34.048792 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"52516d2d-ab6c-4696-97ad-dc2c6af49a45","Type":"ContainerStarted","Data":"6eca45d301d8dc5c487a353653cb3ce204f74845468ab17e5eee471816f19e04"} Dec 08 19:07:36 crc kubenswrapper[4998]: I1208 19:07:36.090250 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:36 crc kubenswrapper[4998]: I1208 19:07:36.090610 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:36 crc kubenswrapper[4998]: I1208 19:07:36.142988 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:37 crc kubenswrapper[4998]: I1208 19:07:37.176507 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:37 crc kubenswrapper[4998]: I1208 19:07:37.252010 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cnk5"] Dec 08 19:07:39 crc kubenswrapper[4998]: I1208 19:07:39.095234 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5cnk5" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" containerName="registry-server" containerID="cri-o://4a581434344b46b239e3a4de9d8e9c629597864fa2e81e26fe2b913db6195977" gracePeriod=2 Dec 08 19:07:40 crc kubenswrapper[4998]: I1208 19:07:40.106481 4998 generic.go:358] "Generic (PLEG): container finished" podID="38eaf202-fddf-46dd-ab7a-859415be736e" 
containerID="4a581434344b46b239e3a4de9d8e9c629597864fa2e81e26fe2b913db6195977" exitCode=0 Dec 08 19:07:40 crc kubenswrapper[4998]: I1208 19:07:40.106997 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cnk5" event={"ID":"38eaf202-fddf-46dd-ab7a-859415be736e","Type":"ContainerDied","Data":"4a581434344b46b239e3a4de9d8e9c629597864fa2e81e26fe2b913db6195977"} Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.251796 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.360319 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lnzhk\" (UniqueName: \"kubernetes.io/projected/38eaf202-fddf-46dd-ab7a-859415be736e-kube-api-access-lnzhk\") pod \"38eaf202-fddf-46dd-ab7a-859415be736e\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.360444 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-utilities\") pod \"38eaf202-fddf-46dd-ab7a-859415be736e\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.360511 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-catalog-content\") pod \"38eaf202-fddf-46dd-ab7a-859415be736e\" (UID: \"38eaf202-fddf-46dd-ab7a-859415be736e\") " Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.361310 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-utilities" (OuterVolumeSpecName: "utilities") pod "38eaf202-fddf-46dd-ab7a-859415be736e" (UID: "38eaf202-fddf-46dd-ab7a-859415be736e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.366611 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38eaf202-fddf-46dd-ab7a-859415be736e-kube-api-access-lnzhk" (OuterVolumeSpecName: "kube-api-access-lnzhk") pod "38eaf202-fddf-46dd-ab7a-859415be736e" (UID: "38eaf202-fddf-46dd-ab7a-859415be736e"). InnerVolumeSpecName "kube-api-access-lnzhk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.396623 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38eaf202-fddf-46dd-ab7a-859415be736e" (UID: "38eaf202-fddf-46dd-ab7a-859415be736e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.464003 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.464040 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38eaf202-fddf-46dd-ab7a-859415be736e-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:45 crc kubenswrapper[4998]: I1208 19:07:45.464050 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lnzhk\" (UniqueName: \"kubernetes.io/projected/38eaf202-fddf-46dd-ab7a-859415be736e-kube-api-access-lnzhk\") on node \"crc\" DevicePath \"\"" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.171132 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5cnk5" event={"ID":"38eaf202-fddf-46dd-ab7a-859415be736e","Type":"ContainerDied","Data":"4f30e358e7c4b8e06ccf57abbeb8dff5e8f45dcaebd2a93cb12f1857b0ccc013"} Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.171168 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5cnk5" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.171270 4998 scope.go:117] "RemoveContainer" containerID="4a581434344b46b239e3a4de9d8e9c629597864fa2e81e26fe2b913db6195977" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.173826 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"52516d2d-ab6c-4696-97ad-dc2c6af49a45","Type":"ContainerStarted","Data":"0141f4282545246013a273f7441d749142a0ab95116cee6ea89f6a264f19a01f"} Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.195784 4998 scope.go:117] "RemoveContainer" containerID="3da1f89a90c7dd52fdcbfc4f0105d24dc455c8d7f0d40d6e6d3db58f7666f03e" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.216851 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=1.961395005 podStartE2EDuration="13.216830661s" podCreationTimestamp="2025-12-08 19:07:33 +0000 UTC" firstStartedPulling="2025-12-08 19:07:34.01019247 +0000 UTC m=+957.658235160" lastFinishedPulling="2025-12-08 19:07:45.265628126 +0000 UTC m=+968.913670816" observedRunningTime="2025-12-08 19:07:46.212882314 +0000 UTC m=+969.860925004" watchObservedRunningTime="2025-12-08 19:07:46.216830661 +0000 UTC m=+969.864873351" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.226799 4998 scope.go:117] "RemoveContainer" containerID="349f486324c6b077ef3b0d498724baa7c6e7da019858b7cad78203ee1760e7e9" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.241414 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5cnk5"] Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.252525 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5cnk5"] Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.494304 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-jbt69"] Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.495048 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" 
containerName="extract-utilities" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.495100 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" containerName="extract-utilities" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.495119 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" containerName="registry-server" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.495125 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" containerName="registry-server" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.495147 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" containerName="extract-content" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.495152 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" containerName="extract-content" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.495264 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" containerName="registry-server" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.500421 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.503366 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.503894 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.504285 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.504438 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.504576 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.504710 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.505315 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-jbt69"] Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.681740 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-sensubility-config\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.682127 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: 
\"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-publisher\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.682223 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-config\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.682374 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-healthcheck-log\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.682433 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.682486 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7rtx\" (UniqueName: \"kubernetes.io/projected/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-kube-api-access-c7rtx\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.682730 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.783960 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-publisher\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.784025 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-config\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.784261 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-healthcheck-log\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 
19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.784333 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.784364 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7rtx\" (UniqueName: \"kubernetes.io/projected/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-kube-api-access-c7rtx\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.784453 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.784538 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-sensubility-config\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.785385 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-publisher\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.785443 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-config\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.785510 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-healthcheck-log\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.785608 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.785846 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-sensubility-config\") pod \"stf-smoketest-smoke1-jbt69\" (UID: 
\"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.786133 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.802356 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7rtx\" (UniqueName: \"kubernetes.io/projected/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-kube-api-access-c7rtx\") pod \"stf-smoketest-smoke1-jbt69\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.816829 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.887271 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.896427 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.903456 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Dec 08 19:07:46 crc kubenswrapper[4998]: I1208 19:07:46.990654 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmck6\" (UniqueName: \"kubernetes.io/projected/666a1ff3-3005-44df-97c5-48bd1d5457cc-kube-api-access-mmck6\") pod \"curl\" (UID: \"666a1ff3-3005-44df-97c5-48bd1d5457cc\") " pod="service-telemetry/curl" Dec 08 19:07:47 crc kubenswrapper[4998]: I1208 19:07:47.092064 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mmck6\" (UniqueName: \"kubernetes.io/projected/666a1ff3-3005-44df-97c5-48bd1d5457cc-kube-api-access-mmck6\") pod \"curl\" (UID: \"666a1ff3-3005-44df-97c5-48bd1d5457cc\") " pod="service-telemetry/curl" Dec 08 19:07:47 crc kubenswrapper[4998]: I1208 19:07:47.108914 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmck6\" (UniqueName: \"kubernetes.io/projected/666a1ff3-3005-44df-97c5-48bd1d5457cc-kube-api-access-mmck6\") pod \"curl\" (UID: \"666a1ff3-3005-44df-97c5-48bd1d5457cc\") " pod="service-telemetry/curl" Dec 08 19:07:47 crc kubenswrapper[4998]: I1208 19:07:47.213890 4998 util.go:30] "No sandbox for pod can be found. 
Dec 08 19:07:47 crc kubenswrapper[4998]: I1208 19:07:47.291511 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-jbt69"]
Dec 08 19:07:47 crc kubenswrapper[4998]: I1208 19:07:47.387989 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38eaf202-fddf-46dd-ab7a-859415be736e" path="/var/lib/kubelet/pods/38eaf202-fddf-46dd-ab7a-859415be736e/volumes"
Dec 08 19:07:47 crc kubenswrapper[4998]: I1208 19:07:47.663063 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"]
Dec 08 19:07:47 crc kubenswrapper[4998]: W1208 19:07:47.670494 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod666a1ff3_3005_44df_97c5_48bd1d5457cc.slice/crio-b85314e80e063eb3418756fdaab8921ac132569e35df926d92593e4398942b86 WatchSource:0}: Error finding container b85314e80e063eb3418756fdaab8921ac132569e35df926d92593e4398942b86: Status 404 returned error can't find the container with id b85314e80e063eb3418756fdaab8921ac132569e35df926d92593e4398942b86
Dec 08 19:07:48 crc kubenswrapper[4998]: I1208 19:07:48.200652 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-jbt69" event={"ID":"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe","Type":"ContainerStarted","Data":"c6581be01326fc33a2028b520a3d54678c1c0320c1d1d298f153ac9722108a40"}
Dec 08 19:07:48 crc kubenswrapper[4998]: I1208 19:07:48.203180 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"666a1ff3-3005-44df-97c5-48bd1d5457cc","Type":"ContainerStarted","Data":"b85314e80e063eb3418756fdaab8921ac132569e35df926d92593e4398942b86"}
Dec 08 19:07:50 crc kubenswrapper[4998]: I1208 19:07:50.228832 4998 generic.go:358] "Generic (PLEG): container finished" podID="666a1ff3-3005-44df-97c5-48bd1d5457cc" containerID="476f74248e5d9429b79858cc1750e4ec2a61dbfac375e92b2f698c684b14984c" exitCode=0
Dec 08 19:07:50 crc kubenswrapper[4998]: I1208 19:07:50.228912 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"666a1ff3-3005-44df-97c5-48bd1d5457cc","Type":"ContainerDied","Data":"476f74248e5d9429b79858cc1750e4ec2a61dbfac375e92b2f698c684b14984c"}
Dec 08 19:07:53 crc kubenswrapper[4998]: I1208 19:07:53.362004 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 08 19:07:53 crc kubenswrapper[4998]: I1208 19:07:53.437821 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mmck6\" (UniqueName: \"kubernetes.io/projected/666a1ff3-3005-44df-97c5-48bd1d5457cc-kube-api-access-mmck6\") pod \"666a1ff3-3005-44df-97c5-48bd1d5457cc\" (UID: \"666a1ff3-3005-44df-97c5-48bd1d5457cc\") "
Dec 08 19:07:53 crc kubenswrapper[4998]: I1208 19:07:53.453264 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/666a1ff3-3005-44df-97c5-48bd1d5457cc-kube-api-access-mmck6" (OuterVolumeSpecName: "kube-api-access-mmck6") pod "666a1ff3-3005-44df-97c5-48bd1d5457cc" (UID: "666a1ff3-3005-44df-97c5-48bd1d5457cc"). InnerVolumeSpecName "kube-api-access-mmck6". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Dec 08 19:07:53 crc kubenswrapper[4998]: I1208 19:07:53.516980 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38856: no serving certificate available for the kubelet"
Dec 08 19:07:53 crc kubenswrapper[4998]: I1208 19:07:53.540209 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mmck6\" (UniqueName: \"kubernetes.io/projected/666a1ff3-3005-44df-97c5-48bd1d5457cc-kube-api-access-mmck6\") on node \"crc\" DevicePath \"\""
Dec 08 19:07:53 crc kubenswrapper[4998]: I1208 19:07:53.741057 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38870: no serving certificate available for the kubelet"
Dec 08 19:07:54 crc kubenswrapper[4998]: I1208 19:07:54.270606 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"666a1ff3-3005-44df-97c5-48bd1d5457cc","Type":"ContainerDied","Data":"b85314e80e063eb3418756fdaab8921ac132569e35df926d92593e4398942b86"}
Dec 08 19:07:54 crc kubenswrapper[4998]: I1208 19:07:54.271000 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b85314e80e063eb3418756fdaab8921ac132569e35df926d92593e4398942b86"
Dec 08 19:07:54 crc kubenswrapper[4998]: I1208 19:07:54.270635 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl"
Dec 08 19:08:00 crc kubenswrapper[4998]: I1208 19:08:00.322540 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-jbt69" event={"ID":"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe","Type":"ContainerStarted","Data":"20fd399f0e84a7af62ea52d5dfac38070ecf6deb3bd8de0ac6da6b1db60a1f0b"}
Dec 08 19:08:01 crc kubenswrapper[4998]: I1208 19:08:01.233404 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:08:01 crc kubenswrapper[4998]: I1208 19:08:01.233996 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:08:09 crc kubenswrapper[4998]: I1208 19:08:09.497806 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-jbt69" event={"ID":"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe","Type":"ContainerStarted","Data":"8fca300f2d28b8c0697e76cc75470adb1fb0b6bfdb09877200ec453681ddd88c"}
Dec 08 19:08:09 crc kubenswrapper[4998]: I1208 19:08:09.532093 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-jbt69" podStartSLOduration=2.282638811 podStartE2EDuration="23.532066976s" podCreationTimestamp="2025-12-08 19:07:46 +0000 UTC" firstStartedPulling="2025-12-08 19:07:47.307853026 +0000 UTC m=+970.955895716" lastFinishedPulling="2025-12-08 19:08:08.557281181 +0000 UTC m=+992.205323881" observedRunningTime="2025-12-08 19:08:09.519062604 +0000 UTC m=+993.167105314" watchObservedRunningTime="2025-12-08 19:08:09.532066976 +0000 UTC m=+993.180109676"
Dec 08 19:08:20 crc kubenswrapper[4998]: E1208 19:08:20.865386 4998 certificate_manager.go:613] "Certificate request was not signed" err="timed out waiting for the condition" logger="kubernetes.io/kubelet-serving.UnhandledError"
Dec 08 19:08:22 crc kubenswrapper[4998]: I1208 19:08:22.892776 4998 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving"
Dec 08 19:08:22 crc kubenswrapper[4998]: I1208 19:08:22.903881 4998 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162"
Dec 08 19:08:22 crc kubenswrapper[4998]: I1208 19:08:22.926046 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38120: no serving certificate available for the kubelet"
Dec 08 19:08:22 crc kubenswrapper[4998]: I1208 19:08:22.961399 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38136: no serving certificate available for the kubelet"
Dec 08 19:08:22 crc kubenswrapper[4998]: I1208 19:08:22.998840 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38150: no serving certificate available for the kubelet"
Dec 08 19:08:23 crc kubenswrapper[4998]: I1208 19:08:23.046921 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38164: no serving certificate available for the kubelet"
Dec 08 19:08:23 crc kubenswrapper[4998]: I1208 19:08:23.110510 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38172: no serving certificate available for the kubelet"
Dec 08 19:08:23 crc kubenswrapper[4998]: I1208 19:08:23.221111 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38188: no serving certificate available for the kubelet"
Dec 08 19:08:23 crc kubenswrapper[4998]: I1208 19:08:23.411718 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38198: no serving certificate available for the kubelet"
Dec 08 19:08:23 crc kubenswrapper[4998]: I1208 19:08:23.795198 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38206: no serving certificate available for the kubelet"
Dec 08 19:08:23 crc kubenswrapper[4998]: I1208 19:08:23.869101 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38214: no serving certificate available for the kubelet"
Dec 08 19:08:24 crc kubenswrapper[4998]: I1208 19:08:24.462590 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38230: no serving certificate available for the kubelet"
Dec 08 19:08:25 crc kubenswrapper[4998]: I1208 19:08:25.767545 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38238: no serving certificate available for the kubelet"
Dec 08 19:08:28 crc kubenswrapper[4998]: I1208 19:08:28.353543 4998 ???:1] "http: TLS handshake error from 192.168.126.11:38240: no serving certificate available for the kubelet"
Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.233723 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.234347 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.234436 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q"
pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.235431 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"399965a7144abb509267fb453f1ab207f97f84a712211b414db1beb1f13515d8"} pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.235510 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" containerID="cri-o://399965a7144abb509267fb453f1ab207f97f84a712211b414db1beb1f13515d8" gracePeriod=600 Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.671954 4998 generic.go:358] "Generic (PLEG): container finished" podID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerID="399965a7144abb509267fb453f1ab207f97f84a712211b414db1beb1f13515d8" exitCode=0 Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.672026 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerDied","Data":"399965a7144abb509267fb453f1ab207f97f84a712211b414db1beb1f13515d8"} Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.672335 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"9197ce2eec0e3ce53183793c8626334b804733dc167cba264b6d177f17bd02ca"} Dec 08 19:08:31 crc kubenswrapper[4998]: I1208 19:08:31.672365 4998 scope.go:117] "RemoveContainer" containerID="244aa3c38fd1050a3c3363d7b092b6291688366b9c539b044db265cb9764a791" Dec 08 19:08:33 crc kubenswrapper[4998]: I1208 19:08:33.508367 4998 ???:1] "http: TLS handshake error from 192.168.126.11:42736: no serving certificate available for the kubelet" Dec 08 19:08:34 crc kubenswrapper[4998]: I1208 19:08:34.702530 4998 generic.go:358] "Generic (PLEG): container finished" podID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerID="20fd399f0e84a7af62ea52d5dfac38070ecf6deb3bd8de0ac6da6b1db60a1f0b" exitCode=0 Dec 08 19:08:34 crc kubenswrapper[4998]: I1208 19:08:34.702619 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-jbt69" event={"ID":"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe","Type":"ContainerDied","Data":"20fd399f0e84a7af62ea52d5dfac38070ecf6deb3bd8de0ac6da6b1db60a1f0b"} Dec 08 19:08:34 crc kubenswrapper[4998]: I1208 19:08:34.703671 4998 scope.go:117] "RemoveContainer" containerID="20fd399f0e84a7af62ea52d5dfac38070ecf6deb3bd8de0ac6da6b1db60a1f0b" Dec 08 19:08:41 crc kubenswrapper[4998]: I1208 19:08:41.764550 4998 generic.go:358] "Generic (PLEG): container finished" podID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerID="8fca300f2d28b8c0697e76cc75470adb1fb0b6bfdb09877200ec453681ddd88c" exitCode=0 Dec 08 19:08:41 crc kubenswrapper[4998]: I1208 19:08:41.765902 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-jbt69" event={"ID":"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe","Type":"ContainerDied","Data":"8fca300f2d28b8c0697e76cc75470adb1fb0b6bfdb09877200ec453681ddd88c"} Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 
19:08:43.040583 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.148142 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-healthcheck-log\") pod \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.148198 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-publisher\") pod \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.148266 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-config\") pod \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.148307 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-entrypoint-script\") pod \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.148354 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-sensubility-config\") pod \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.148628 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c7rtx\" (UniqueName: \"kubernetes.io/projected/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-kube-api-access-c7rtx\") pod \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.148726 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-entrypoint-script\") pod \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\" (UID: \"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe\") " Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.155053 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-kube-api-access-c7rtx" (OuterVolumeSpecName: "kube-api-access-c7rtx") pod "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" (UID: "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe"). InnerVolumeSpecName "kube-api-access-c7rtx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.168122 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" (UID: "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe"). 
InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.171546 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" (UID: "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.177727 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" (UID: "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.180158 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" (UID: "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.188875 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" (UID: "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.195482 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" (UID: "0878da2f-3188-47a0-a7c6-d4ab55a5dcfe"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.250945 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c7rtx\" (UniqueName: \"kubernetes.io/projected/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-kube-api-access-c7rtx\") on node \"crc\" DevicePath \"\"" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.250992 4998 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.251003 4998 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-healthcheck-log\") on node \"crc\" DevicePath \"\"" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.251012 4998 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.251023 4998 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-collectd-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.251030 4998 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.251039 4998 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/0878da2f-3188-47a0-a7c6-d4ab55a5dcfe-sensubility-config\") on node \"crc\" DevicePath \"\"" Dec 08 19:08:43 crc kubenswrapper[4998]: E1208 19:08:43.421218 4998 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0878da2f_3188_47a0_a7c6_d4ab55a5dcfe.slice\": RecentStats: unable to find data in memory cache]" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.773782 4998 ???:1] "http: TLS handshake error from 192.168.126.11:40012: no serving certificate available for the kubelet" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.788142 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-jbt69" Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.788174 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-jbt69" event={"ID":"0878da2f-3188-47a0-a7c6-d4ab55a5dcfe","Type":"ContainerDied","Data":"c6581be01326fc33a2028b520a3d54678c1c0320c1d1d298f153ac9722108a40"} Dec 08 19:08:43 crc kubenswrapper[4998]: I1208 19:08:43.788246 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6581be01326fc33a2028b520a3d54678c1c0320c1d1d298f153ac9722108a40" Dec 08 19:08:54 crc kubenswrapper[4998]: I1208 19:08:54.009379 4998 ???:1] "http: TLS handshake error from 192.168.126.11:52790: no serving certificate available for the kubelet" Dec 08 19:09:04 crc kubenswrapper[4998]: I1208 19:09:04.282592 4998 ???:1] "http: TLS handshake error from 192.168.126.11:48076: no serving certificate available for the kubelet" Dec 08 19:09:24 crc kubenswrapper[4998]: I1208 19:09:24.153356 4998 ???:1] "http: TLS handshake error from 192.168.126.11:37754: no serving certificate available for the kubelet" Dec 08 19:09:45 crc kubenswrapper[4998]: I1208 19:09:45.275591 4998 ???:1] "http: TLS handshake error from 192.168.126.11:55492: no serving certificate available for the kubelet" Dec 08 19:09:54 crc kubenswrapper[4998]: I1208 19:09:54.288411 4998 ???:1] "http: TLS handshake error from 192.168.126.11:43102: no serving certificate available for the kubelet" Dec 08 19:10:25 crc kubenswrapper[4998]: I1208 19:10:25.266824 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33464: no serving certificate available for the kubelet" Dec 08 19:10:25 crc kubenswrapper[4998]: I1208 19:10:25.491639 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33468: no serving certificate available for the kubelet" Dec 08 19:10:25 crc kubenswrapper[4998]: I1208 19:10:25.721068 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33480: no serving certificate available for the kubelet" Dec 08 19:10:25 crc kubenswrapper[4998]: I1208 19:10:25.960209 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33486: no serving certificate available for the kubelet" Dec 08 19:10:26 crc kubenswrapper[4998]: I1208 19:10:26.185862 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33494: no serving certificate available for the kubelet" Dec 08 19:10:26 crc kubenswrapper[4998]: I1208 19:10:26.400159 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33502: no serving certificate available for the kubelet" Dec 08 19:10:26 crc kubenswrapper[4998]: I1208 19:10:26.632221 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33506: no serving certificate available for the kubelet" Dec 08 19:10:26 crc kubenswrapper[4998]: I1208 19:10:26.868504 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33520: no serving certificate available for the kubelet" Dec 08 19:10:27 crc kubenswrapper[4998]: I1208 19:10:27.132172 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33532: no serving certificate available for the kubelet" Dec 08 19:10:27 crc kubenswrapper[4998]: I1208 19:10:27.375647 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33540: no serving certificate available for the kubelet" Dec 08 19:10:27 crc kubenswrapper[4998]: I1208 19:10:27.612856 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33548: no serving certificate available for the kubelet" Dec 08 19:10:27 crc kubenswrapper[4998]: I1208 19:10:27.875181 4998 ???:1] 
"http: TLS handshake error from 192.168.126.11:33556: no serving certificate available for the kubelet" Dec 08 19:10:28 crc kubenswrapper[4998]: I1208 19:10:28.143536 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33572: no serving certificate available for the kubelet" Dec 08 19:10:28 crc kubenswrapper[4998]: I1208 19:10:28.389006 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33584: no serving certificate available for the kubelet" Dec 08 19:10:28 crc kubenswrapper[4998]: I1208 19:10:28.627597 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33592: no serving certificate available for the kubelet" Dec 08 19:10:28 crc kubenswrapper[4998]: I1208 19:10:28.888786 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33600: no serving certificate available for the kubelet" Dec 08 19:10:29 crc kubenswrapper[4998]: I1208 19:10:29.127286 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33604: no serving certificate available for the kubelet" Dec 08 19:10:29 crc kubenswrapper[4998]: I1208 19:10:29.390707 4998 ???:1] "http: TLS handshake error from 192.168.126.11:33606: no serving certificate available for the kubelet" Dec 08 19:10:31 crc kubenswrapper[4998]: I1208 19:10:31.233525 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:10:31 crc kubenswrapper[4998]: I1208 19:10:31.234518 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:10:41 crc kubenswrapper[4998]: I1208 19:10:41.721465 4998 ???:1] "http: TLS handshake error from 192.168.126.11:49144: no serving certificate available for the kubelet" Dec 08 19:10:41 crc kubenswrapper[4998]: I1208 19:10:41.985779 4998 ???:1] "http: TLS handshake error from 192.168.126.11:49160: no serving certificate available for the kubelet" Dec 08 19:10:42 crc kubenswrapper[4998]: I1208 19:10:42.235385 4998 ???:1] "http: TLS handshake error from 192.168.126.11:49174: no serving certificate available for the kubelet" Dec 08 19:11:01 crc kubenswrapper[4998]: I1208 19:11:01.234150 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:11:01 crc kubenswrapper[4998]: I1208 19:11:01.235065 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:11:07 crc kubenswrapper[4998]: I1208 19:11:07.227818 4998 ???:1] "http: TLS handshake error from 192.168.126.11:49818: no serving certificate available for the kubelet" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.629675 4998 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-must-gather-vqj4w/must-gather-zr8md"] Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.630903 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerName="smoketest-ceilometer" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.630934 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerName="smoketest-ceilometer" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.630976 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="666a1ff3-3005-44df-97c5-48bd1d5457cc" containerName="curl" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.630983 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="666a1ff3-3005-44df-97c5-48bd1d5457cc" containerName="curl" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.631000 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerName="smoketest-collectd" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.631005 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerName="smoketest-collectd" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.631179 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="666a1ff3-3005-44df-97c5-48bd1d5457cc" containerName="curl" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.631200 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerName="smoketest-collectd" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.631217 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="0878da2f-3188-47a0-a7c6-d4ab55a5dcfe" containerName="smoketest-ceilometer" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.635033 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.637956 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-vqj4w\"/\"openshift-service-ca.crt\"" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.638215 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-vqj4w\"/\"default-dockercfg-2vfz9\"" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.643025 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-vqj4w\"/\"kube-root-ca.crt\"" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.666038 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vqj4w/must-gather-zr8md"] Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.823541 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drnjn\" (UniqueName: \"kubernetes.io/projected/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-kube-api-access-drnjn\") pod \"must-gather-zr8md\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.823603 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-must-gather-output\") pod \"must-gather-zr8md\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.925657 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-drnjn\" (UniqueName: \"kubernetes.io/projected/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-kube-api-access-drnjn\") pod \"must-gather-zr8md\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.926143 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-must-gather-output\") pod \"must-gather-zr8md\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.926635 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-must-gather-output\") pod \"must-gather-zr8md\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:16 crc kubenswrapper[4998]: I1208 19:11:16.959015 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-drnjn\" (UniqueName: \"kubernetes.io/projected/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-kube-api-access-drnjn\") pod \"must-gather-zr8md\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:17 crc kubenswrapper[4998]: I1208 19:11:17.258352 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:11:17 crc kubenswrapper[4998]: I1208 19:11:17.557387 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vqj4w/must-gather-zr8md"] Dec 08 19:11:18 crc kubenswrapper[4998]: I1208 19:11:18.509980 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqj4w/must-gather-zr8md" event={"ID":"8a5378cd-249b-4593-bbfd-c9e3baa0a27a","Type":"ContainerStarted","Data":"8dff59ee97c0602c2f414bd9e0f225ce55e41f1b6556ad3a025e498769167923"} Dec 08 19:11:24 crc kubenswrapper[4998]: I1208 19:11:24.557362 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqj4w/must-gather-zr8md" event={"ID":"8a5378cd-249b-4593-bbfd-c9e3baa0a27a","Type":"ContainerStarted","Data":"16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08"} Dec 08 19:11:24 crc kubenswrapper[4998]: I1208 19:11:24.557908 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqj4w/must-gather-zr8md" event={"ID":"8a5378cd-249b-4593-bbfd-c9e3baa0a27a","Type":"ContainerStarted","Data":"687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7"} Dec 08 19:11:24 crc kubenswrapper[4998]: I1208 19:11:24.580274 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vqj4w/must-gather-zr8md" podStartSLOduration=2.58615914 podStartE2EDuration="8.580253559s" podCreationTimestamp="2025-12-08 19:11:16 +0000 UTC" firstStartedPulling="2025-12-08 19:11:17.566200014 +0000 UTC m=+1181.214242704" lastFinishedPulling="2025-12-08 19:11:23.560294433 +0000 UTC m=+1187.208337123" observedRunningTime="2025-12-08 19:11:24.577565286 +0000 UTC m=+1188.225607986" watchObservedRunningTime="2025-12-08 19:11:24.580253559 +0000 UTC m=+1188.228296249" Dec 08 19:11:26 crc kubenswrapper[4998]: I1208 19:11:26.616942 4998 ???:1] "http: TLS handshake error from 192.168.126.11:45042: no serving certificate available for the kubelet" Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.233649 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.234090 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.234171 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.235147 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"9197ce2eec0e3ce53183793c8626334b804733dc167cba264b6d177f17bd02ca"} pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.235219 4998 kuberuntime_container.go:858] "Killing container with a grace period" 
pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" containerID="cri-o://9197ce2eec0e3ce53183793c8626334b804733dc167cba264b6d177f17bd02ca" gracePeriod=600 Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.608138 4998 generic.go:358] "Generic (PLEG): container finished" podID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerID="9197ce2eec0e3ce53183793c8626334b804733dc167cba264b6d177f17bd02ca" exitCode=0 Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.608217 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerDied","Data":"9197ce2eec0e3ce53183793c8626334b804733dc167cba264b6d177f17bd02ca"} Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.608570 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"94d843dcabdf4cce0a5c2e4d564d81b9d3ed7654e4e120ce35acecd700f22716"} Dec 08 19:11:31 crc kubenswrapper[4998]: I1208 19:11:31.608613 4998 scope.go:117] "RemoveContainer" containerID="399965a7144abb509267fb453f1ab207f97f84a712211b414db1beb1f13515d8" Dec 08 19:11:38 crc kubenswrapper[4998]: I1208 19:11:38.045274 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-72nfz_88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa/kube-multus/0.log" Dec 08 19:11:38 crc kubenswrapper[4998]: I1208 19:11:38.050202 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-72nfz_88f11a4e-e168-4ddd-bb7b-7eb4ddd4c9aa/kube-multus/0.log" Dec 08 19:11:38 crc kubenswrapper[4998]: I1208 19:11:38.058180 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:11:38 crc kubenswrapper[4998]: I1208 19:11:38.061486 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Dec 08 19:12:01 crc kubenswrapper[4998]: I1208 19:12:01.477317 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-kg2lt"] Dec 08 19:12:01 crc kubenswrapper[4998]: I1208 19:12:01.490641 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:01 crc kubenswrapper[4998]: I1208 19:12:01.492475 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kg2lt"] Dec 08 19:12:01 crc kubenswrapper[4998]: I1208 19:12:01.591973 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghr56\" (UniqueName: \"kubernetes.io/projected/dfd9f7d8-c1b9-41d1-bf9a-810458d875e7-kube-api-access-ghr56\") pod \"infrawatch-operators-kg2lt\" (UID: \"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7\") " pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:01 crc kubenswrapper[4998]: I1208 19:12:01.693131 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ghr56\" (UniqueName: \"kubernetes.io/projected/dfd9f7d8-c1b9-41d1-bf9a-810458d875e7-kube-api-access-ghr56\") pod \"infrawatch-operators-kg2lt\" (UID: \"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7\") " pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:01 crc kubenswrapper[4998]: I1208 19:12:01.716289 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghr56\" (UniqueName: \"kubernetes.io/projected/dfd9f7d8-c1b9-41d1-bf9a-810458d875e7-kube-api-access-ghr56\") pod \"infrawatch-operators-kg2lt\" (UID: \"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7\") " pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:01 crc kubenswrapper[4998]: I1208 19:12:01.828787 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:02 crc kubenswrapper[4998]: I1208 19:12:02.097348 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-kg2lt"] Dec 08 19:12:02 crc kubenswrapper[4998]: I1208 19:12:02.106588 4998 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Dec 08 19:12:02 crc kubenswrapper[4998]: I1208 19:12:02.972067 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kg2lt" event={"ID":"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7","Type":"ContainerStarted","Data":"fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6"} Dec 08 19:12:02 crc kubenswrapper[4998]: I1208 19:12:02.973295 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kg2lt" event={"ID":"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7","Type":"ContainerStarted","Data":"f4c5d55735b9435e02aaa30eef73cb6b15f6a936312a2bf05cb7454837b9c3ac"} Dec 08 19:12:02 crc kubenswrapper[4998]: I1208 19:12:02.994428 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-kg2lt" podStartSLOduration=1.448593838 podStartE2EDuration="1.994385546s" podCreationTimestamp="2025-12-08 19:12:01 +0000 UTC" firstStartedPulling="2025-12-08 19:12:02.106869256 +0000 UTC m=+1225.754911936" lastFinishedPulling="2025-12-08 19:12:02.652660954 +0000 UTC m=+1226.300703644" observedRunningTime="2025-12-08 19:12:02.990333696 +0000 UTC m=+1226.638376386" watchObservedRunningTime="2025-12-08 19:12:02.994385546 +0000 UTC m=+1226.642428246" Dec 08 19:12:07 crc kubenswrapper[4998]: I1208 19:12:07.375345 4998 ???:1] "http: TLS handshake error from 192.168.126.11:51558: no serving certificate available for the kubelet" Dec 08 19:12:07 crc kubenswrapper[4998]: I1208 19:12:07.582343 4998 ???:1] "http: TLS handshake 
error from 192.168.126.11:51572: no serving certificate available for the kubelet" Dec 08 19:12:07 crc kubenswrapper[4998]: I1208 19:12:07.587599 4998 ???:1] "http: TLS handshake error from 192.168.126.11:51582: no serving certificate available for the kubelet" Dec 08 19:12:11 crc kubenswrapper[4998]: I1208 19:12:11.829303 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:11 crc kubenswrapper[4998]: I1208 19:12:11.831925 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:11 crc kubenswrapper[4998]: I1208 19:12:11.870444 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:12 crc kubenswrapper[4998]: I1208 19:12:12.069452 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:12 crc kubenswrapper[4998]: I1208 19:12:12.122238 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kg2lt"] Dec 08 19:12:14 crc kubenswrapper[4998]: I1208 19:12:14.060299 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-kg2lt" podUID="dfd9f7d8-c1b9-41d1-bf9a-810458d875e7" containerName="registry-server" containerID="cri-o://fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6" gracePeriod=2 Dec 08 19:12:14 crc kubenswrapper[4998]: I1208 19:12:14.507328 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:14 crc kubenswrapper[4998]: I1208 19:12:14.646391 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghr56\" (UniqueName: \"kubernetes.io/projected/dfd9f7d8-c1b9-41d1-bf9a-810458d875e7-kube-api-access-ghr56\") pod \"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7\" (UID: \"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7\") " Dec 08 19:12:14 crc kubenswrapper[4998]: I1208 19:12:14.667259 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfd9f7d8-c1b9-41d1-bf9a-810458d875e7-kube-api-access-ghr56" (OuterVolumeSpecName: "kube-api-access-ghr56") pod "dfd9f7d8-c1b9-41d1-bf9a-810458d875e7" (UID: "dfd9f7d8-c1b9-41d1-bf9a-810458d875e7"). InnerVolumeSpecName "kube-api-access-ghr56". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:12:14 crc kubenswrapper[4998]: I1208 19:12:14.751399 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ghr56\" (UniqueName: \"kubernetes.io/projected/dfd9f7d8-c1b9-41d1-bf9a-810458d875e7-kube-api-access-ghr56\") on node \"crc\" DevicePath \"\"" Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.076071 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kg2lt" event={"ID":"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7","Type":"ContainerDied","Data":"fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6"} Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.076175 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-kg2lt" Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.076215 4998 scope.go:117] "RemoveContainer" containerID="fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6" Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.076493 4998 generic.go:358] "Generic (PLEG): container finished" podID="dfd9f7d8-c1b9-41d1-bf9a-810458d875e7" containerID="fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6" exitCode=0 Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.077018 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-kg2lt" event={"ID":"dfd9f7d8-c1b9-41d1-bf9a-810458d875e7","Type":"ContainerDied","Data":"f4c5d55735b9435e02aaa30eef73cb6b15f6a936312a2bf05cb7454837b9c3ac"} Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.127484 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-kg2lt"] Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.128084 4998 scope.go:117] "RemoveContainer" containerID="fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6" Dec 08 19:12:15 crc kubenswrapper[4998]: E1208 19:12:15.129057 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6\": container with ID starting with fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6 not found: ID does not exist" containerID="fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6" Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.129133 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6"} err="failed to get container status \"fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6\": rpc error: code = NotFound desc = could not find container \"fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6\": container with ID starting with fda86c324f10a27e35ee066d2f7ac52bf8463d315e04a978c9e1f9b473e70bf6 not found: ID does not exist" Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.136119 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-kg2lt"] Dec 08 19:12:15 crc kubenswrapper[4998]: I1208 19:12:15.377710 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfd9f7d8-c1b9-41d1-bf9a-810458d875e7" path="/var/lib/kubelet/pods/dfd9f7d8-c1b9-41d1-bf9a-810458d875e7/volumes" Dec 08 19:12:21 crc kubenswrapper[4998]: I1208 19:12:21.250508 4998 ???:1] "http: TLS handshake error from 192.168.126.11:55056: no serving certificate available for the kubelet" Dec 08 19:12:21 crc kubenswrapper[4998]: I1208 19:12:21.358477 4998 ???:1] "http: TLS handshake error from 192.168.126.11:55058: no serving certificate available for the kubelet" Dec 08 19:12:21 crc kubenswrapper[4998]: I1208 19:12:21.488012 4998 ???:1] "http: TLS handshake error from 192.168.126.11:55070: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.169717 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46314: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.360181 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46328: no serving certificate available for the kubelet" Dec 
08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.381016 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46344: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.395948 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46354: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.594728 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46362: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.628107 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46364: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.633963 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46376: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.821848 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46390: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.968798 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46392: no serving certificate available for the kubelet" Dec 08 19:12:37 crc kubenswrapper[4998]: I1208 19:12:37.971839 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46396: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.008159 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46410: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.169991 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46412: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.208213 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46422: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.209754 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46434: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.339272 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46436: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.571282 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46446: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.592188 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46460: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.601990 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46468: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.711848 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46484: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.745761 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46488: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.783054 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46502: no serving certificate available for the kubelet" Dec 08 19:12:38 crc kubenswrapper[4998]: I1208 19:12:38.909806 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46518: no serving certificate 
available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.082829 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46534: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.101022 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46544: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.106299 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46560: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.300750 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46570: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.327926 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46578: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.335969 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46592: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.467671 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46596: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.678911 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46610: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.723341 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46622: no serving certificate available for the kubelet" Dec 08 19:12:39 crc kubenswrapper[4998]: I1208 19:12:39.735381 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46632: no serving certificate available for the kubelet" Dec 08 19:12:40 crc kubenswrapper[4998]: I1208 19:12:40.232605 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46638: no serving certificate available for the kubelet" Dec 08 19:12:40 crc kubenswrapper[4998]: I1208 19:12:40.279908 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46654: no serving certificate available for the kubelet" Dec 08 19:12:40 crc kubenswrapper[4998]: I1208 19:12:40.291216 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46656: no serving certificate available for the kubelet" Dec 08 19:12:40 crc kubenswrapper[4998]: I1208 19:12:40.297458 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46664: no serving certificate available for the kubelet" Dec 08 19:12:40 crc kubenswrapper[4998]: I1208 19:12:40.484956 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46668: no serving certificate available for the kubelet" Dec 08 19:12:40 crc kubenswrapper[4998]: I1208 19:12:40.514996 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46682: no serving certificate available for the kubelet" Dec 08 19:12:40 crc kubenswrapper[4998]: I1208 19:12:40.829228 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46690: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.042216 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46698: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.055285 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46708: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.068122 4998 ???:1] "http: TLS handshake error from 
192.168.126.11:46724: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.119268 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46728: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.259472 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46740: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.431116 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46744: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.431802 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46754: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.472982 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46760: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.643886 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46770: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.676439 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46782: no serving certificate available for the kubelet" Dec 08 19:12:41 crc kubenswrapper[4998]: I1208 19:12:41.690709 4998 ???:1] "http: TLS handshake error from 192.168.126.11:46790: no serving certificate available for the kubelet" Dec 08 19:12:47 crc kubenswrapper[4998]: I1208 19:12:47.882625 4998 ???:1] "http: TLS handshake error from 192.168.126.11:57114: no serving certificate available for the kubelet" Dec 08 19:12:53 crc kubenswrapper[4998]: I1208 19:12:53.743817 4998 ???:1] "http: TLS handshake error from 192.168.126.11:42240: no serving certificate available for the kubelet" Dec 08 19:12:53 crc kubenswrapper[4998]: I1208 19:12:53.974039 4998 ???:1] "http: TLS handshake error from 192.168.126.11:42256: no serving certificate available for the kubelet" Dec 08 19:12:54 crc kubenswrapper[4998]: I1208 19:12:54.020295 4998 ???:1] "http: TLS handshake error from 192.168.126.11:42266: no serving certificate available for the kubelet" Dec 08 19:12:54 crc kubenswrapper[4998]: I1208 19:12:54.142513 4998 ???:1] "http: TLS handshake error from 192.168.126.11:42276: no serving certificate available for the kubelet" Dec 08 19:12:54 crc kubenswrapper[4998]: I1208 19:12:54.355901 4998 ???:1] "http: TLS handshake error from 192.168.126.11:42290: no serving certificate available for the kubelet" Dec 08 19:13:31 crc kubenswrapper[4998]: I1208 19:13:31.233340 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:13:31 crc kubenswrapper[4998]: I1208 19:13:31.234186 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:13:34 crc kubenswrapper[4998]: I1208 19:13:34.798225 4998 generic.go:358] "Generic (PLEG): container finished" podID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" 
containerID="687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7" exitCode=0 Dec 08 19:13:34 crc kubenswrapper[4998]: I1208 19:13:34.798328 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vqj4w/must-gather-zr8md" event={"ID":"8a5378cd-249b-4593-bbfd-c9e3baa0a27a","Type":"ContainerDied","Data":"687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7"} Dec 08 19:13:34 crc kubenswrapper[4998]: I1208 19:13:34.799242 4998 scope.go:117] "RemoveContainer" containerID="687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.630780 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58120: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.753817 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58134: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.764818 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58150: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.786400 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58164: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.797595 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58176: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.812083 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58186: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.825477 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58196: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.841510 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58202: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.857964 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58210: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.967139 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58224: no serving certificate available for the kubelet" Dec 08 19:13:36 crc kubenswrapper[4998]: I1208 19:13:36.980342 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58240: no serving certificate available for the kubelet" Dec 08 19:13:37 crc kubenswrapper[4998]: I1208 19:13:37.002517 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58256: no serving certificate available for the kubelet" Dec 08 19:13:37 crc kubenswrapper[4998]: I1208 19:13:37.015876 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58266: no serving certificate available for the kubelet" Dec 08 19:13:37 crc kubenswrapper[4998]: I1208 19:13:37.101274 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58274: no serving certificate available for the kubelet" Dec 08 19:13:37 crc kubenswrapper[4998]: I1208 19:13:37.113355 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58282: no serving certificate available for the kubelet" Dec 08 19:13:37 crc kubenswrapper[4998]: I1208 19:13:37.127418 4998 ???:1] "http: TLS handshake error from 192.168.126.11:58288: no serving certificate available for the kubelet" Dec 08 19:13:37 crc kubenswrapper[4998]: I1208 19:13:37.138016 4998 ???:1] "http: TLS handshake error 
from 192.168.126.11:58290: no serving certificate available for the kubelet" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.177806 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-vqj4w/must-gather-zr8md"] Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.178791 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-vqj4w/must-gather-zr8md" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerName="copy" containerID="cri-o://16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08" gracePeriod=2 Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.182270 4998 status_manager.go:895] "Failed to get status for pod" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" pod="openshift-must-gather-vqj4w/must-gather-zr8md" err="pods \"must-gather-zr8md\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-vqj4w\": no relationship found between node 'crc' and this object" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.182386 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-vqj4w/must-gather-zr8md"] Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.657555 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vqj4w_must-gather-zr8md_8a5378cd-249b-4593-bbfd-c9e3baa0a27a/copy/0.log" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.658607 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.661119 4998 status_manager.go:895] "Failed to get status for pod" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" pod="openshift-must-gather-vqj4w/must-gather-zr8md" err="pods \"must-gather-zr8md\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-vqj4w\": no relationship found between node 'crc' and this object" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.715202 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-must-gather-output\") pod \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.715307 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-drnjn\" (UniqueName: \"kubernetes.io/projected/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-kube-api-access-drnjn\") pod \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\" (UID: \"8a5378cd-249b-4593-bbfd-c9e3baa0a27a\") " Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.723620 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-kube-api-access-drnjn" (OuterVolumeSpecName: "kube-api-access-drnjn") pod "8a5378cd-249b-4593-bbfd-c9e3baa0a27a" (UID: "8a5378cd-249b-4593-bbfd-c9e3baa0a27a"). InnerVolumeSpecName "kube-api-access-drnjn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.791234 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8a5378cd-249b-4593-bbfd-c9e3baa0a27a" (UID: "8a5378cd-249b-4593-bbfd-c9e3baa0a27a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.817708 4998 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-must-gather-output\") on node \"crc\" DevicePath \"\"" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.817762 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-drnjn\" (UniqueName: \"kubernetes.io/projected/8a5378cd-249b-4593-bbfd-c9e3baa0a27a-kube-api-access-drnjn\") on node \"crc\" DevicePath \"\"" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.877708 4998 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-vqj4w_must-gather-zr8md_8a5378cd-249b-4593-bbfd-c9e3baa0a27a/copy/0.log" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.878307 4998 generic.go:358] "Generic (PLEG): container finished" podID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerID="16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08" exitCode=143 Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.878652 4998 scope.go:117] "RemoveContainer" containerID="16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.878820 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vqj4w/must-gather-zr8md" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.883078 4998 status_manager.go:895] "Failed to get status for pod" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" pod="openshift-must-gather-vqj4w/must-gather-zr8md" err="pods \"must-gather-zr8md\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-vqj4w\": no relationship found between node 'crc' and this object" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.903858 4998 status_manager.go:895] "Failed to get status for pod" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" pod="openshift-must-gather-vqj4w/must-gather-zr8md" err="pods \"must-gather-zr8md\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-vqj4w\": no relationship found between node 'crc' and this object" Dec 08 19:13:42 crc kubenswrapper[4998]: I1208 19:13:42.914388 4998 scope.go:117] "RemoveContainer" containerID="687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7" Dec 08 19:13:43 crc kubenswrapper[4998]: I1208 19:13:43.006321 4998 scope.go:117] "RemoveContainer" containerID="16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08" Dec 08 19:13:43 crc kubenswrapper[4998]: E1208 19:13:43.007009 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08\": container with ID starting with 16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08 not found: ID does not exist" containerID="16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08" Dec 08 19:13:43 crc kubenswrapper[4998]: I1208 19:13:43.007082 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08"} err="failed to get container status \"16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08\": rpc error: code = NotFound desc = could not find container \"16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08\": container with ID starting with 16520b553d208eae784f239810fa75a1363cd276f09e72d7da000b672b4f4f08 not found: ID does not exist" Dec 08 19:13:43 crc kubenswrapper[4998]: I1208 19:13:43.007130 4998 scope.go:117] "RemoveContainer" containerID="687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7" Dec 08 19:13:43 crc kubenswrapper[4998]: E1208 19:13:43.007492 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7\": container with ID starting with 687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7 not found: ID does not exist" containerID="687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7" Dec 08 19:13:43 crc kubenswrapper[4998]: I1208 19:13:43.007523 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7"} err="failed to get container status \"687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7\": rpc error: code = NotFound desc = could not find container \"687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7\": container with ID starting with 
687a0afec54d054de11a4e511c70a9813a68f487f457419272e8bd901521d8e7 not found: ID does not exist" Dec 08 19:13:43 crc kubenswrapper[4998]: I1208 19:13:43.374835 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" path="/var/lib/kubelet/pods/8a5378cd-249b-4593-bbfd-c9e3baa0a27a/volumes" Dec 08 19:13:51 crc kubenswrapper[4998]: I1208 19:13:51.101064 4998 ???:1] "http: TLS handshake error from 192.168.126.11:40820: no serving certificate available for the kubelet" Dec 08 19:14:01 crc kubenswrapper[4998]: I1208 19:14:01.232755 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:14:01 crc kubenswrapper[4998]: I1208 19:14:01.233399 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.233039 4998 patch_prober.go:28] interesting pod/machine-config-daemon-gwq5q container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.233763 4998 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.233894 4998 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.234832 4998 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"94d843dcabdf4cce0a5c2e4d564d81b9d3ed7654e4e120ce35acecd700f22716"} pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.235081 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" podUID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerName="machine-config-daemon" containerID="cri-o://94d843dcabdf4cce0a5c2e4d564d81b9d3ed7654e4e120ce35acecd700f22716" gracePeriod=600 Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.381651 4998 generic.go:358] "Generic (PLEG): container finished" podID="0c186590-6bde-4b05-ac4d-9e6f0e656d17" containerID="94d843dcabdf4cce0a5c2e4d564d81b9d3ed7654e4e120ce35acecd700f22716" exitCode=0 Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.382570 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" 
event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerDied","Data":"94d843dcabdf4cce0a5c2e4d564d81b9d3ed7654e4e120ce35acecd700f22716"} Dec 08 19:14:31 crc kubenswrapper[4998]: I1208 19:14:31.383373 4998 scope.go:117] "RemoveContainer" containerID="9197ce2eec0e3ce53183793c8626334b804733dc167cba264b6d177f17bd02ca" Dec 08 19:14:32 crc kubenswrapper[4998]: I1208 19:14:32.392883 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-gwq5q" event={"ID":"0c186590-6bde-4b05-ac4d-9e6f0e656d17","Type":"ContainerStarted","Data":"066842a09e59bae5282a59327a19abff9cf99486ce2a67157f4f403da6baf101"} Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.413386 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2hph2"] Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.414824 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerName="gather" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.414875 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerName="gather" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.414913 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dfd9f7d8-c1b9-41d1-bf9a-810458d875e7" containerName="registry-server" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.414921 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfd9f7d8-c1b9-41d1-bf9a-810458d875e7" containerName="registry-server" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.414937 4998 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerName="copy" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.414944 4998 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerName="copy" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.415116 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerName="gather" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.415146 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="dfd9f7d8-c1b9-41d1-bf9a-810458d875e7" containerName="registry-server" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.415160 4998 memory_manager.go:356] "RemoveStaleState removing state" podUID="8a5378cd-249b-4593-bbfd-c9e3baa0a27a" containerName="copy" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.420775 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.438420 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2hph2"] Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.518226 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwmjf\" (UniqueName: \"kubernetes.io/projected/1f69bf89-25ed-4b7e-a881-004cd09e4a68-kube-api-access-bwmjf\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.518364 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-utilities\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.518420 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-catalog-content\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.619555 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bwmjf\" (UniqueName: \"kubernetes.io/projected/1f69bf89-25ed-4b7e-a881-004cd09e4a68-kube-api-access-bwmjf\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.620535 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-utilities\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.620718 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-catalog-content\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.621672 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-catalog-content\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.622020 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-utilities\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.646981 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-bwmjf\" (UniqueName: \"kubernetes.io/projected/1f69bf89-25ed-4b7e-a881-004cd09e4a68-kube-api-access-bwmjf\") pod \"redhat-operators-2hph2\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:56 crc kubenswrapper[4998]: I1208 19:14:56.756937 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:14:57 crc kubenswrapper[4998]: I1208 19:14:57.041424 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2hph2"] Dec 08 19:14:57 crc kubenswrapper[4998]: I1208 19:14:57.876225 4998 generic.go:358] "Generic (PLEG): container finished" podID="1f69bf89-25ed-4b7e-a881-004cd09e4a68" containerID="fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170" exitCode=0 Dec 08 19:14:57 crc kubenswrapper[4998]: I1208 19:14:57.876288 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hph2" event={"ID":"1f69bf89-25ed-4b7e-a881-004cd09e4a68","Type":"ContainerDied","Data":"fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170"} Dec 08 19:14:57 crc kubenswrapper[4998]: I1208 19:14:57.876596 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hph2" event={"ID":"1f69bf89-25ed-4b7e-a881-004cd09e4a68","Type":"ContainerStarted","Data":"dea1785458a156dadb979f485ca636c66b3cbf97dad8fc3c6fcbfd84d54c5f67"} Dec 08 19:14:59 crc kubenswrapper[4998]: I1208 19:14:59.897002 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hph2" event={"ID":"1f69bf89-25ed-4b7e-a881-004cd09e4a68","Type":"ContainerStarted","Data":"48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418"} Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.165877 4998 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h"] Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.177347 4998 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.181918 4998 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.184784 4998 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.191878 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h"] Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.294275 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e549280a-859d-454d-aace-8b8b66423c48-secret-volume\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.294718 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pd2f\" (UniqueName: \"kubernetes.io/projected/e549280a-859d-454d-aace-8b8b66423c48-kube-api-access-5pd2f\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.298984 4998 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e549280a-859d-454d-aace-8b8b66423c48-config-volume\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.400242 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e549280a-859d-454d-aace-8b8b66423c48-secret-volume\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.400936 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5pd2f\" (UniqueName: \"kubernetes.io/projected/e549280a-859d-454d-aace-8b8b66423c48-kube-api-access-5pd2f\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.401063 4998 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e549280a-859d-454d-aace-8b8b66423c48-config-volume\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.401999 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/e549280a-859d-454d-aace-8b8b66423c48-config-volume\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.425826 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e549280a-859d-454d-aace-8b8b66423c48-secret-volume\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.433562 4998 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pd2f\" (UniqueName: \"kubernetes.io/projected/e549280a-859d-454d-aace-8b8b66423c48-kube-api-access-5pd2f\") pod \"collect-profiles-29420355-4bw9h\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.506321 4998 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.763960 4998 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h"] Dec 08 19:15:00 crc kubenswrapper[4998]: W1208 19:15:00.770313 4998 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode549280a_859d_454d_aace_8b8b66423c48.slice/crio-f6b7af57feb05b896a008f19b5d389cd9bde45bad18d7fb2f7065082df5ed8ef WatchSource:0}: Error finding container f6b7af57feb05b896a008f19b5d389cd9bde45bad18d7fb2f7065082df5ed8ef: Status 404 returned error can't find the container with id f6b7af57feb05b896a008f19b5d389cd9bde45bad18d7fb2f7065082df5ed8ef Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.923572 4998 generic.go:358] "Generic (PLEG): container finished" podID="1f69bf89-25ed-4b7e-a881-004cd09e4a68" containerID="48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418" exitCode=0 Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.923962 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hph2" event={"ID":"1f69bf89-25ed-4b7e-a881-004cd09e4a68","Type":"ContainerDied","Data":"48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418"} Dec 08 19:15:00 crc kubenswrapper[4998]: I1208 19:15:00.952066 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" event={"ID":"e549280a-859d-454d-aace-8b8b66423c48","Type":"ContainerStarted","Data":"f6b7af57feb05b896a008f19b5d389cd9bde45bad18d7fb2f7065082df5ed8ef"} Dec 08 19:15:01 crc kubenswrapper[4998]: I1208 19:15:01.974848 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hph2" event={"ID":"1f69bf89-25ed-4b7e-a881-004cd09e4a68","Type":"ContainerStarted","Data":"62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e"} Dec 08 19:15:01 crc kubenswrapper[4998]: I1208 19:15:01.978845 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" 
event={"ID":"e549280a-859d-454d-aace-8b8b66423c48","Type":"ContainerDied","Data":"dc4bcc0b918ec4dfb486aa920e86e9ddc60b579263b965b9068dc80f98e25f91"} Dec 08 19:15:01 crc kubenswrapper[4998]: I1208 19:15:01.979131 4998 generic.go:358] "Generic (PLEG): container finished" podID="e549280a-859d-454d-aace-8b8b66423c48" containerID="dc4bcc0b918ec4dfb486aa920e86e9ddc60b579263b965b9068dc80f98e25f91" exitCode=0 Dec 08 19:15:01 crc kubenswrapper[4998]: I1208 19:15:01.999790 4998 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2hph2" podStartSLOduration=4.73257357 podStartE2EDuration="5.999749776s" podCreationTimestamp="2025-12-08 19:14:56 +0000 UTC" firstStartedPulling="2025-12-08 19:14:57.877353944 +0000 UTC m=+1401.525396634" lastFinishedPulling="2025-12-08 19:14:59.14453014 +0000 UTC m=+1402.792572840" observedRunningTime="2025-12-08 19:15:01.995221623 +0000 UTC m=+1405.643264323" watchObservedRunningTime="2025-12-08 19:15:01.999749776 +0000 UTC m=+1405.647792486" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.230095 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.312244 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e549280a-859d-454d-aace-8b8b66423c48-secret-volume\") pod \"e549280a-859d-454d-aace-8b8b66423c48\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.312403 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e549280a-859d-454d-aace-8b8b66423c48-config-volume\") pod \"e549280a-859d-454d-aace-8b8b66423c48\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.312503 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pd2f\" (UniqueName: \"kubernetes.io/projected/e549280a-859d-454d-aace-8b8b66423c48-kube-api-access-5pd2f\") pod \"e549280a-859d-454d-aace-8b8b66423c48\" (UID: \"e549280a-859d-454d-aace-8b8b66423c48\") " Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.313158 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e549280a-859d-454d-aace-8b8b66423c48-config-volume" (OuterVolumeSpecName: "config-volume") pod "e549280a-859d-454d-aace-8b8b66423c48" (UID: "e549280a-859d-454d-aace-8b8b66423c48"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.333937 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e549280a-859d-454d-aace-8b8b66423c48-kube-api-access-5pd2f" (OuterVolumeSpecName: "kube-api-access-5pd2f") pod "e549280a-859d-454d-aace-8b8b66423c48" (UID: "e549280a-859d-454d-aace-8b8b66423c48"). InnerVolumeSpecName "kube-api-access-5pd2f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.357749 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e549280a-859d-454d-aace-8b8b66423c48-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e549280a-859d-454d-aace-8b8b66423c48" (UID: "e549280a-859d-454d-aace-8b8b66423c48"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.414186 4998 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e549280a-859d-454d-aace-8b8b66423c48-config-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.414217 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5pd2f\" (UniqueName: \"kubernetes.io/projected/e549280a-859d-454d-aace-8b8b66423c48-kube-api-access-5pd2f\") on node \"crc\" DevicePath \"\"" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.414229 4998 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e549280a-859d-454d-aace-8b8b66423c48-secret-volume\") on node \"crc\" DevicePath \"\"" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.997009 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" event={"ID":"e549280a-859d-454d-aace-8b8b66423c48","Type":"ContainerDied","Data":"f6b7af57feb05b896a008f19b5d389cd9bde45bad18d7fb2f7065082df5ed8ef"} Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.997443 4998 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f6b7af57feb05b896a008f19b5d389cd9bde45bad18d7fb2f7065082df5ed8ef" Dec 08 19:15:03 crc kubenswrapper[4998]: I1208 19:15:03.997341 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29420355-4bw9h" Dec 08 19:15:06 crc kubenswrapper[4998]: I1208 19:15:06.757271 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:15:06 crc kubenswrapper[4998]: I1208 19:15:06.757604 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:15:06 crc kubenswrapper[4998]: I1208 19:15:06.825528 4998 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:15:07 crc kubenswrapper[4998]: I1208 19:15:07.070302 4998 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:15:07 crc kubenswrapper[4998]: I1208 19:15:07.996522 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2hph2"] Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.040525 4998 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2hph2" podUID="1f69bf89-25ed-4b7e-a881-004cd09e4a68" containerName="registry-server" containerID="cri-o://62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e" gracePeriod=2 Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.430607 4998 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.444639 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwmjf\" (UniqueName: \"kubernetes.io/projected/1f69bf89-25ed-4b7e-a881-004cd09e4a68-kube-api-access-bwmjf\") pod \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.444979 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-catalog-content\") pod \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.445078 4998 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-utilities\") pod \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\" (UID: \"1f69bf89-25ed-4b7e-a881-004cd09e4a68\") " Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.455047 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f69bf89-25ed-4b7e-a881-004cd09e4a68-kube-api-access-bwmjf" (OuterVolumeSpecName: "kube-api-access-bwmjf") pod "1f69bf89-25ed-4b7e-a881-004cd09e4a68" (UID: "1f69bf89-25ed-4b7e-a881-004cd09e4a68"). InnerVolumeSpecName "kube-api-access-bwmjf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.455077 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-utilities" (OuterVolumeSpecName: "utilities") pod "1f69bf89-25ed-4b7e-a881-004cd09e4a68" (UID: "1f69bf89-25ed-4b7e-a881-004cd09e4a68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.546672 4998 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-utilities\") on node \"crc\" DevicePath \"\"" Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.546913 4998 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bwmjf\" (UniqueName: \"kubernetes.io/projected/1f69bf89-25ed-4b7e-a881-004cd09e4a68-kube-api-access-bwmjf\") on node \"crc\" DevicePath \"\"" Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.581303 4998 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1f69bf89-25ed-4b7e-a881-004cd09e4a68" (UID: "1f69bf89-25ed-4b7e-a881-004cd09e4a68"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Dec 08 19:15:09 crc kubenswrapper[4998]: I1208 19:15:09.648381 4998 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1f69bf89-25ed-4b7e-a881-004cd09e4a68-catalog-content\") on node \"crc\" DevicePath \"\"" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.050216 4998 generic.go:358] "Generic (PLEG): container finished" podID="1f69bf89-25ed-4b7e-a881-004cd09e4a68" containerID="62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e" exitCode=0 Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.050415 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hph2" event={"ID":"1f69bf89-25ed-4b7e-a881-004cd09e4a68","Type":"ContainerDied","Data":"62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e"} Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.050444 4998 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2hph2" event={"ID":"1f69bf89-25ed-4b7e-a881-004cd09e4a68","Type":"ContainerDied","Data":"dea1785458a156dadb979f485ca636c66b3cbf97dad8fc3c6fcbfd84d54c5f67"} Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.050470 4998 scope.go:117] "RemoveContainer" containerID="62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.050627 4998 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2hph2" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.093195 4998 scope.go:117] "RemoveContainer" containerID="48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.105412 4998 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2hph2"] Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.110563 4998 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2hph2"] Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.127892 4998 scope.go:117] "RemoveContainer" containerID="fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.166094 4998 scope.go:117] "RemoveContainer" containerID="62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e" Dec 08 19:15:10 crc kubenswrapper[4998]: E1208 19:15:10.167465 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e\": container with ID starting with 62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e not found: ID does not exist" containerID="62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.167524 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e"} err="failed to get container status \"62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e\": rpc error: code = NotFound desc = could not find container \"62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e\": container with ID starting with 62872416347d2efaa5c99e3a6eae1c089e25ddb5b7c8be0b22a9d22e9ad3f08e not found: ID does not exist" Dec 08 19:15:10 crc 
kubenswrapper[4998]: I1208 19:15:10.167562 4998 scope.go:117] "RemoveContainer" containerID="48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418" Dec 08 19:15:10 crc kubenswrapper[4998]: E1208 19:15:10.168467 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418\": container with ID starting with 48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418 not found: ID does not exist" containerID="48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.168534 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418"} err="failed to get container status \"48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418\": rpc error: code = NotFound desc = could not find container \"48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418\": container with ID starting with 48c41c6578d15a70e9f424158edd78b3b95521ac864c01432a4bc434ff884418 not found: ID does not exist" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.168578 4998 scope.go:117] "RemoveContainer" containerID="fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170" Dec 08 19:15:10 crc kubenswrapper[4998]: E1208 19:15:10.170190 4998 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170\": container with ID starting with fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170 not found: ID does not exist" containerID="fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170" Dec 08 19:15:10 crc kubenswrapper[4998]: I1208 19:15:10.170247 4998 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170"} err="failed to get container status \"fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170\": rpc error: code = NotFound desc = could not find container \"fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170\": container with ID starting with fc7930f9970d052a7057b164568ab2c3c5a41fd3589296a8b8d72b94051a1170 not found: ID does not exist" Dec 08 19:15:11 crc kubenswrapper[4998]: I1208 19:15:11.378816 4998 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f69bf89-25ed-4b7e-a881-004cd09e4a68" path="/var/lib/kubelet/pods/1f69bf89-25ed-4b7e-a881-004cd09e4a68/volumes" var/home/core/zuul-output/logs/crc-cloud-workdir-crc-all-logs.tar.gz0000644000175000000000000000005515115621750024450 0ustar coreroot‹íÁ  ÷Om7 €7šÞ'(var/home/core/zuul-output/logs/crc-cloud/0000755000175000000000000000000015115621751017366 5ustar corerootvar/home/core/zuul-output/artifacts/0000755000175000017500000000000015115616502016507 5ustar corecorevar/home/core/zuul-output/docs/0000755000175000017500000000000015115616502015457 5ustar corecore